review_adr

Analyze Architecture Decision Records for completeness; identify missing context, unconsidered alternatives, and optimistic consequences to improve decision quality.

Instructions

AI quality review of an ADR — scores completeness, flags missing context, unconsidered alternatives, and optimistic consequences

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| adr_id | Yes | ADR ID to review | (none) |
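
For orientation, a minimal sketch of invoking this tool from MCP client code. `mcpClient` is assumed to be an already-connected `Client` from the MCP TypeScript SDK, and ADR 12 is a hypothetical record:

    const result = await mcpClient.callTool({
      name: 'review_adr',
      arguments: { adr_id: 12 },
    });
    console.log(result.content[0].text); // markdown review report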

Implementation Reference

  • adr.js:42-84 (handler)
    The implementation of the ADR review logic, which calls the Claude API.
    // `client` is an Anthropic SDK client (e.g. `new Anthropic()` from
    // '@anthropic-ai/sdk') created above this excerpted range in adr.js.
    export async function reviewADR(adr) {
      const stream = client.messages.stream({
        model: 'claude-opus-4-6',
        max_tokens: 2048,
        thinking: { type: 'adaptive' },
        system: `You are a senior software architect reviewing Architecture Decision Records for quality and completeness.
    Evaluate the ADR critically and return a JSON review with this exact structure:
    
    {
      "score": <integer 0-100>,
      "summary": "One-sentence overall assessment",
      "issues": [
        { "severity": "high|medium|low", "field": "context|decision|consequences|title", "message": "specific problem description" }
      ],
      "suggestions": [
        "Concrete, actionable improvement suggestion"
      ]
    }
    
    Evaluate against these criteria:
    - Context: Is the problem clearly stated? Are constraints and forces explained?
    - Decision: Are alternatives considered and rejected? Is the rationale explicit?
    - Consequences: Are both positive and negative outcomes listed? Are risks acknowledged?
    - Title: Does it capture the decision, not just the topic?`,
        messages: [
          {
            role: 'user',
            content: `Review this ADR for quality:\n\n# ${adr.title}\n\n## Context\n${adr.context}\n\n## Decision\n${adr.decision}\n\n## Consequences\n${adr.consequences}`,
          },
        ],
      });
    
      // Wait for the stream to complete and collect the final message.
      const response = await stream.finalMessage();
    
      // The prompt asks for bare JSON, but the model may wrap it in prose;
      // extract the first {...} span from the text block and parse it.
      for (const block of response.content) {
        if (block.type === 'text') {
          const match = block.text.match(/\{[\s\S]*\}/);
          if (match) return JSON.parse(match[0]);
        }
      }
    
      throw new Error('Failed to parse review JSON from AI response');
    }
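
    A hedged usage sketch: the ADR object's shape mirrors the fields the
    prompt interpolates, and the result mirrors the JSON structure the
    system prompt requests; the field values here are illustrative.

    const review = await reviewADR({
      title: 'Use PostgreSQL for the primary datastore',
      context: 'We need relational queries and strong consistency...',
      decision: 'Adopt managed PostgreSQL.',
      consequences: 'Operational familiarity; some vendor lock-in risk.',
    });
    // review -> { score, summary, issues: [...], suggestions: [...] }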
  • index.js:157-192 (registration)
    Registration of the `review_adr` tool and its MCP handler.
    // `z` is zod; `getADR` is the ADR lookup helper defined or imported
    // earlier in index.js, above this excerpted range.
    server.registerTool('review_adr', {
      description: 'AI quality review of an ADR — scores completeness, flags missing context, unconsidered alternatives, and optimistic consequences',
      inputSchema: {
        adr_id: z.number().describe('ADR ID to review'),
      },
    }, async ({ adr_id }) => {
      if (!process.env.ANTHROPIC_API_KEY) {
        throw new Error('ANTHROPIC_API_KEY is required for AI review');
      }
    
      const adr = getADR(adr_id);
      if (!adr) throw new Error(`ADR ${adr_id} not found`);
    
      const review = await reviewADR(adr);
    
      const severityIcon = { high: '🔴', medium: '🟡', low: '🟢' };
      const issueLines = (review.issues ?? [])
        .map(i => `${severityIcon[i.severity] ?? '•'} [${i.field}] ${i.message}`)
        .join('\n') || 'No issues found';
    
      const suggestionLines = (review.suggestions ?? [])
        .map((s, i) => `${i + 1}. ${s}`)
        .join('\n') || 'No suggestions';
    
      const output = `## ADR-${adr_id} Review — Score: ${review.score}/100
    
    **${review.summary}**
    
    ### Issues
    ${issueLines}
    
    ### Suggestions
    ${suggestionLines}`;
    
      return { content: [{ type: 'text', text: output }] };
    });
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It says the tool performs an 'AI quality review' and lists the evaluation criteria, but it doesn't describe the output format, the outbound Anthropic API call (and the ANTHROPIC_API_KEY it requires), or limitations such as rate limits. That leaves significant gaps for a tool that returns structured feedback.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
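
One way to close this gap, sketched under the assumption that the MCP SDK in use supports tool annotations (the annotation fields come from the MCP spec; the expanded description wording is illustrative):

    server.registerTool('review_adr', {
      description:
        'AI quality review of an ADR — scores completeness, flags missing ' +
        'context, unconsidered alternatives, and optimistic consequences. ' +
        'Read-only; makes one outbound Anthropic API call (requires ' +
        'ANTHROPIC_API_KEY) and returns a markdown report.',
      inputSchema: { adr_id: z.number().describe('ADR ID to review') },
      annotations: {
        readOnlyHint: true,  // never modifies an ADR
        openWorldHint: true, // calls an external LLM API
      },
    }, handler); // `handler` stands in for the existing implementation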

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('AI quality review of an ADR') and then specifies the evaluation aspects. There's no wasted verbiage, and every phrase adds value to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (AI-based quality assessment) and lack of annotations or output schema, the description is incomplete. It doesn't explain what the review output looks like (e.g., scores, flags, recommendations), how results are structured, or any behavioral nuances, which are critical for an agent to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
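
One remedy, assuming an MCP SDK version that supports output schemas: declare the review's result shape so agents can see the structure before calling. A minimal sketch, mirroring the JSON structure the system prompt requests:

    import { z } from 'zod';

    const reviewOutputSchema = {
      score: z.number().int().min(0).max(100),
      summary: z.string(),
      issues: z.array(z.object({
        severity: z.enum(['high', 'medium', 'low']),
        field: z.enum(['context', 'decision', 'consequences', 'title']),
        message: z.string(),
      })),
      suggestions: z.array(z.string()),
    };
    // Passed as `outputSchema: reviewOutputSchema` in the registerTool config.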

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents the 'adr_id' parameter. The description adds no parameter-specific details beyond the schema, such as format constraints or example values, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
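
For example, the schema entry could encode the integer constraint and an example value directly; a small sketch using standard zod refinements:

    import { z } from 'zod';

    const inputSchema = {
      adr_id: z.number().int().positive()
        .describe('Numeric ID of an existing ADR, e.g. 12 for ADR-12'),
    };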

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('AI quality review') and resource ('ADR'), and distinguishes its purpose from siblings by focusing on quality assessment rather than generation, retrieval, or linking. It specifies what the review evaluates: completeness, missing context, unconsidered alternatives, and optimistic consequences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives like 'check_stale_adrs' or 'search_decisions'. The description implies usage for quality assessment but doesn't specify prerequisites, timing, or exclusions, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
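
Illustrative only (the sibling tool names come from this review, and the wording is a sketch of what explicit guidance could look like):

    const description =
      'AI quality review of an ADR — scores completeness, flags missing ' +
      'context, unconsidered alternatives, and optimistic consequences. ' +
      'Use after drafting or revising an ADR; use search_decisions to ' +
      'locate ADRs and check_stale_adrs to find outdated ones instead.';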
