Glama

compare_adr_progress

Validate implementation status by comparing TODO.md progress against ADRs and the current environment to verify compliance and identify gaps.

Instructions

Compare TODO.md progress against ADRs and current environment to validate implementation status

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| todoPath | No | Path to TODO.md file to analyze | TODO.md |
| adrDirectory | No | Directory containing ADR files | docs/adrs |
| projectPath | No | Path to project root for environment analysis | . |
| environment | No | Target environment context for validation (auto-detect will infer from project structure) | auto-detect |
| environmentConfig | No | Environment-specific configuration and requirements | |
| validationType | No | Type of validation to perform | full |
| includeFileChecks | No | Include file existence and implementation checks | |
| includeRuleValidation | No | Include architectural rule compliance validation | |
| deepCodeAnalysis | No | Perform deep code analysis to distinguish mock from production implementations | |
| functionalValidation | No | Validate that code actually functions according to ADR goals, not just exists | |
| strictMode | No | Enable strict validation mode with reality-check mechanisms against overconfident assessments | |
| environmentValidation | No | Enable environment-specific validation rules and checks | |
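Since every parameter is optional, a minimal call can rely entirely on the defaults. As a sketch, an MCP `tools/call` request for this tool might look like the following JSON-RPC body; the argument values shown are illustrative choices, not taken from the server's documentation:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_adr_progress",
    "arguments": {
      "todoPath": "TODO.md",
      "adrDirectory": "docs/adrs",
      "validationType": "full",
      "strictMode": true
    }
  }
}
```

Omitting `arguments` keys falls back to the defaults listed in the table above.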
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosing behavior. It only states 'compare and validate', lacking details on side effects (e.g., whether it performs reads or writes), required permissions, or how validation works. Multiple boolean parameters hint at complex behavior, but the description remains silent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, front-loading the verb and resources. There is no wasted text, making it efficient for scanning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 12 parameters, many booleans with defaults, no output schema, and no annotations, the description is too brief. It does not explain the output format, validation results, or how to interpret the comparison. The complexity demands more completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: every parameter carries its own description in the input schema. The tool description adds no parameter-specific meaning beyond the generic purpose, so the baseline score of 3 stands; there are no schema gaps for the description to compensate for.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the verb 'Compare' and the resources involved (TODO.md progress, ADRs, current environment) in validating implementation status. The three-way comparison partially differentiates it from siblings, but the description never explicitly contrasts itself with similar sibling tools such as 'analyze_gaps' or 'validate_adr'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusions. The agent receives no context on preferred scenarios or tool selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/tosin2013/mcp-adr-analysis-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.