Glama

compare_adr_progress

Validate implementation progress by comparing TODO tasks against architectural decisions and current environment status to ensure alignment.

Instructions

Compare TODO.md progress against ADRs and current environment to validate implementation status

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| todoPath | No | Path to TODO.md file to analyze | `TODO.md` |
| adrDirectory | No | Directory containing ADR files | `docs/adrs` |
| projectPath | No | Path to project root for environment analysis | `.` |
| environment | No | Target environment context for validation (auto-detect will infer from project structure) | `auto-detect` |
| environmentConfig | No | Environment-specific configuration and requirements | |
| validationType | No | Type of validation to perform | `full` |
| includeFileChecks | No | Include file existence and implementation checks | |
| includeRuleValidation | No | Include architectural rule compliance validation | |
| deepCodeAnalysis | No | Perform deep code analysis to distinguish mock from production implementations | |
| functionalValidation | No | Validate that code actually functions according to ADR goals, not just exists | |
| strictMode | No | Enable strict validation mode with reality-check mechanisms against overconfident assessments | |
| environmentValidation | No | Enable environment-specific validation rules and checks | |
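Every parameter is optional. As an illustrative sketch, a caller might build an arguments object like the one below; this is hypothetical and simply mirrors the schema defaults, not an example taken from this server's documentation:

```typescript
// Hypothetical arguments object for a compare_adr_progress call,
// mirroring the schema above. Every field is optional; the string
// fields show their documented defaults.
const args = {
  todoPath: 'TODO.md',        // default: TODO.md
  adrDirectory: 'docs/adrs',  // default: docs/adrs
  projectPath: '.',           // default: .
  environment: 'auto-detect', // default: auto-detect
  validationType: 'full',     // default: full
  includeFileChecks: true,
  includeRuleValidation: true,
  deepCodeAnalysis: false,
  functionalValidation: false,
  strictMode: true,
  environmentValidation: false,
};

console.log(JSON.stringify(args, null, 2));
```

Omitting every field is also valid, since the defaults cover the common single-project layout.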

Implementation Reference

  • Schema definition and metadata registration for the compare_adr_progress tool in the central TOOL_CATALOG. Includes inputSchema with adrIds parameter, category 'adr', requiresAI true, and CE-MCP directive support.
    ```typescript
    TOOL_CATALOG.set('compare_adr_progress', {
      name: 'compare_adr_progress',
      shortDescription: 'Compare ADR implementation progress',
      fullDescription: 'Compares ADR decisions against actual implementation to measure progress.',
      category: 'adr',
      complexity: 'moderate',
      tokenCost: { min: 2000, max: 4000 },
      hasCEMCPDirective: true, // Phase 4.3: Moderate tool - progress comparison
      relatedTools: ['analyze_adr_timeline', 'validate_all_adrs'],
      keywords: ['adr', 'compare', 'progress', 'implementation'],
      requiresAI: true,
      inputSchema: {
        type: 'object',
        properties: {
          adrIds: { type: 'array', items: { type: 'string' } },
        },
      },
    });
    ```
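Given that registration, a planner can query the catalog by category or token budget. A minimal self-contained sketch; the `ToolCatalogEntry` interface and the `toolsWithinBudget` helper are assumptions for illustration, not code from this server:

```typescript
// Hypothetical shape for catalog entries; only the fields used below.
interface ToolCatalogEntry {
  name: string;
  shortDescription: string;
  category: string;
  tokenCost: { min: number; max: number };
  requiresAI: boolean;
}

const TOOL_CATALOG = new Map<string, ToolCatalogEntry>();
TOOL_CATALOG.set('compare_adr_progress', {
  name: 'compare_adr_progress',
  shortDescription: 'Compare ADR implementation progress',
  category: 'adr',
  tokenCost: { min: 2000, max: 4000 },
  requiresAI: true,
});

// Select ADR tools whose worst-case token cost fits a given budget.
function toolsWithinBudget(budget: number): string[] {
  return [...TOOL_CATALOG.values()]
    .filter(t => t.category === 'adr' && t.tokenCost.max <= budget)
    .map(t => t.name);
}

console.log(toolsWithinBudget(5000)); // prints: [ 'compare_adr_progress' ]
console.log(toolsWithinBudget(3000)); // prints: []
```

Budget filtering on `tokenCost.max` is the conservative choice: a tool is only planned in when even its most expensive run fits.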
  • Registration of compare_adr_progress in AVAILABLE_TOOLS array used for AI planning and orchestration.
    ```typescript
    const AVAILABLE_TOOLS = [
      'analyze_project_ecosystem',
      'generate_adrs_from_prd',
      'suggest_adrs',
      'analyze_content_security',
      'generate_rules',
      'generate_adr_todo',
      'compare_adr_progress',
      'manage_todo',
      'generate_deployment_guidance',
      'smart_score',
      'troubleshoot_guided_workflow',
      'smart_git_push',
      'generate_research_questions',
      // …(remaining entries truncated in the source)
    ];
    ```
  • Helper reference in planningTools array for CE-MCP tool chain orchestration directives.
    ```typescript
    const planningTools = [
      'analyze_project_ecosystem',
      'generate_adrs_from_prd',
      'suggest_adrs',
      'analyze_content_security',
      'generate_rules',
      'generate_adr_todo',
      'compare_adr_progress',
      'manage_todo',
      'generate_deployment_guidance',
      'smart_score',
      'troubleshoot_guided_workflow',
      'smart_git_push',
      'perform_research',
      'validate_rules',
    ];
    ```
  • Helper reference in server context tools list for LLM awareness.
    ```typescript
    {
      name: 'compare_adr_progress',
      description: 'Compare ADR implementation progress against requirements',
    },
    ```
  • Registration in TOOL_CAPABILITIES mapping with description for AI context in tool chain planning.
    ```typescript
    compare_adr_progress: 'Validate TODO vs ADRs vs actual environment state',
    ```

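The capability strings can be folded into an AI planning prompt. A small sketch, assuming a plain `Record<string, string>` shape for TOOL_CAPABILITIES; the `describeToolsForPrompt` helper is hypothetical:

```typescript
// Capability strings keyed by tool name, as in the mapping above.
const TOOL_CAPABILITIES: Record<string, string> = {
  compare_adr_progress: 'Validate TODO vs ADRs vs actual environment state',
};

// Render one "- name: capability" line per tool for an LLM system prompt.
function describeToolsForPrompt(caps: Record<string, string>): string {
  return Object.entries(caps)
    .map(([name, desc]) => `- ${name}: ${desc}`)
    .join('\n');
}

console.log(describeToolsForPrompt(TOOL_CAPABILITIES));
// prints: - compare_adr_progress: Validate TODO vs ADRs vs actual environment state
```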
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'validate implementation status' but doesn't describe what the tool actually does behaviorally: for example, whether it performs read-only analysis, modifies files, requires specific permissions, has side effects, or returns structured output. For a complex tool with 12 parameters, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. It directly states what the tool does ('Compare TODO.md progress against ADRs and current environment to validate implementation status'), with zero waste or redundancy. Every word earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (12 parameters, nested objects, no output schema, and no annotations), the description is incomplete. It doesn't address what the tool returns, how validation is performed, error handling, or behavioral traits. For a validation tool with a rich input schema but no output schema, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so parameters are well-documented in the schema itself. The description adds no specific parameter semantics beyond implying validation across TODO.md, ADRs, and environment. It doesn't explain how parameters interact or their practical use, but with high schema coverage, the baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare TODO.md progress against ADRs and current environment to validate implementation status.' It specifies the verb 'compare' and resources (TODO.md, ADRs, environment), making the action clear. However, it doesn't explicitly differentiate from sibling tools like 'validate_adr' or 'analyze_deployment_progress', which appear related but have distinct scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools like 'validate_adr', 'analyze_deployment_progress', and 'validate_all_adrs', there's no indication of how this tool's focus on TODO.md comparison differs or when it's preferred. The description implies usage for validation but lacks explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
