assess_coverage

Evaluate knowledge coverage completeness for project directories by analyzing documentation gaps. Provides a 0-100 score and actionable recommendations to improve knowledge retention.

Instructions

Evaluate knowledge coverage completeness for a project directory. Returns a 0-100 coverage score and actionable recommendations.

Input Schema

Name         Required   Description   Default
directory    No         -             -
domain       No         -             -
stale_days   No         -             -
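
For orientation, here is a minimal sketch of what a call to this tool might look like, expressed as an MCP tools/call request payload in TypeScript. The argument values are assumptions for illustration only; the schema does not document accepted formats, valid domains, or defaults.

// Hypothetical tools/call request for assess_coverage (MCP JSON-RPC shape).
// All argument values below are illustrative assumptions; the schema does not
// document accepted formats, valid domains, or defaults.
const assessCoverageRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "assess_coverage",
    arguments: {
      directory: "/path/to/project", // assumed: project root to evaluate
      domain: "architecture",        // assumed: knowledge domain to scope the check
      stale_days: 30,                // assumed: staleness threshold in days
    },
  },
};

Since all three parameters are optional, any subset can presumably be omitted, but the behavior with missing arguments is not documented.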

Output Schema

Name     Required   Description   Default
result   Yes        -             -
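
The output schema declares only a required result field, so its internal shape is undocumented. The sketch below is a purely hypothetical guess at that shape, inferred from the description's mention of a 0-100 score and actionable recommendations; the actual field names may differ.

// Purely hypothetical result shape, inferred from the tool description.
// The server's actual field names and nesting are not documented.
interface AssessCoverageResult {
  result: {
    score: number;             // assumed: 0-100 coverage score
    recommendations: string[]; // assumed: actionable suggestions
  };
}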
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions that the tool returns a coverage score and recommendations, it doesn't describe what 'coverage' means, how the evaluation works, whether the tool is read-only or has side effects, what its performance characteristics are, or what error conditions to expect. For a tool with 3 parameters and no annotation coverage, this is insufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that efficiently state the tool's purpose and output. The first sentence clearly describes what the tool does, and the second sentence specifies the return format. There's no unnecessary verbiage or repetition, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 3 parameters with 0% schema description coverage and no annotations; although an output schema is present, the description is incomplete. The output schema may document return values, but the description doesn't adequately explain the tool's behavior, parameter usage, or relationship to sibling tools. For a coverage assessment tool in a complex memory/analysis system, more context about what 'coverage' means and how the parameters affect the evaluation would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 3 parameters (directory, domain, stale_days), the description provides no information about what these parameters mean or how they affect the evaluation. It doesn't explain what 'directory' should contain, what 'domain' refers to, or what 'stale_days' controls. The description fails to compensate for the complete lack of parameter documentation in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
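
As an illustration of what this dimension asks for, the sketch below shows one way the input schema could describe these parameters. The descriptions and the default value are assumptions, not the server's documented behavior.

// Illustrative only: one way the input schema could document these parameters.
// Every description and the default below are assumptions, not the server's
// actual semantics.
const documentedInputSchema = {
  type: "object",
  properties: {
    directory: {
      type: "string",
      description: "Path to the project directory to evaluate", // assumed meaning
    },
    domain: {
      type: "string",
      description: "Knowledge domain to scope the assessment, e.g. 'architecture' or 'testing'", // assumed meaning
    },
    stale_days: {
      type: "integer",
      description: "Age in days after which knowledge entries count as stale", // assumed meaning
      default: 30, // assumed default
    },
  },
};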

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Evaluate knowledge coverage completeness for a project directory' names both the verb (evaluate) and the resource (knowledge coverage for a project directory). It distinguishes itself from siblings like 'detect_gaps' or 'codebase_analyze' by focusing on coverage assessment rather than gap detection or code analysis. However, it doesn't explicitly differentiate itself from all potential alternatives such as 'validate_memory' or 'checkpoint'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, appropriate contexts, or exclusions. With many sibling tools like 'detect_gaps', 'validate_memory', and 'codebase_analyze' that might overlap in functionality, the absence of usage guidelines leaves the agent without direction on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cdeust/Cortex'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.