Glama

diff_results

Compare before and after web accessibility scans to verify fixes, identify regressions, and track resolved or added penalties with severity changes per target.

Instructions

Compare two Tactual analysis results (before/after). Shows what improved, regressed, which penalties were resolved or added, and severity band changes per target. Returns a JSON array of {targetId, baselineScore, candidateScore, status, penalties}.

Read-only, no side effects. Use after fixing accessibility issues to verify improvements. Both inputs must be JSON strings from analyze_url (format='json'). Not useful for SARIF output — use analyze_url directly for before/after SARIF comparisons.
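The comparison logic described above can be sketched in Python. Tactual's actual per-target field names, scoring scale, and status labels are not documented on this page, so the input shape (`{"targetId", "score", "penalties"}`) and the status vocabulary below are assumptions for illustration only, not the tool's real implementation.

```python
import json


def diff_results(baseline_json: str, candidate_json: str) -> list[dict]:
    """Sketch of a before/after diff over two analysis results.

    Assumes each input is a JSON array of objects shaped like
    {"targetId": str, "score": number, "penalties": [str, ...]}.
    """
    baseline = {t["targetId"]: t for t in json.loads(baseline_json)}
    candidate = {t["targetId"]: t for t in json.loads(candidate_json)}

    out = []
    for target_id in sorted(baseline.keys() | candidate.keys()):
        b = baseline.get(target_id)
        c = candidate.get(target_id)
        b_score = b["score"] if b else None
        c_score = c["score"] if c else None

        # Hypothetical status labels; the real tool's vocabulary may differ.
        if b is None:
            status = "added"
        elif c is None:
            status = "removed"
        elif c_score > b_score:
            status = "improved"
        elif c_score < b_score:
            status = "regressed"
        else:
            status = "unchanged"

        # Penalties present before but not after are resolved; the reverse are added.
        b_pens = set(b["penalties"]) if b else set()
        c_pens = set(c["penalties"]) if c else set()

        out.append({
            "targetId": target_id,
            "baselineScore": b_score,
            "candidateScore": c_score,
            "status": status,
            "penalties": {
                "resolved": sorted(b_pens - c_pens),
                "added": sorted(c_pens - b_pens),
            },
        })
    return out
```

Keying both sides by `targetId` before diffing makes targets that appear in only one scan easy to classify, which matches the description's claim that the tool reports what was resolved or added per target.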

Input Schema

Name       Required  Description                                Default
baseline   Yes       Baseline analysis result as JSON string    —
candidate  Yes       Candidate analysis result as JSON string   —
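Since both parameters must be JSON strings rather than parsed objects, a caller typically re-serializes the `analyze_url` output before building the tool call. The result shape below is a hypothetical stand-in, and `arguments` simply mirrors the schema above; this is a sketch, not Tactual's documented API.

```python
import json

# Hypothetical analyze_url (format='json') outputs, already parsed into
# Python objects. The field names here are illustrative assumptions.
baseline_result = [{"targetId": "https://example.com/", "score": 72}]
candidate_result = [{"targetId": "https://example.com/", "score": 91}]

# The schema requires both inputs as JSON *strings*, so serialize them.
arguments = {
    "baseline": json.dumps(baseline_result),
    "candidate": json.dumps(candidate_result),
}
```

Passing the parsed objects directly would fail the string-typed schema, so `json.dumps` is the key step.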
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses safety properties ('Read-only, no side effects'), compensates for the missing output schema by documenting the return structure ('Returns a JSON array of {targetId...}'), and explains the comparison logic ('Shows what improved, regressed...'). The one gap is that it does not mention error handling for malformed inputs, which prevents a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Seven sentences deliver high information density with zero redundancy. The structure is front-loaded (purpose first), followed by behavioral details, return format, usage timing, input constraints, and sibling differentiation. Every sentence earns its place with no filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (2 string parameters, comparison logic) and absence of both annotations and output schema, the description achieves completeness by covering safety profile, return value structure, input provenance, and sibling relationships. Nothing critical is missing for an agent to invoke this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% coverage ('Baseline analysis result as JSON string'), the description adds crucial provenance context that the schema lacks: specifying that inputs must come from analyze_url (format='json'). This source constraint is vital for correct usage and elevates the score above the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Compare') and resource ('Tactual analysis results'), including the before/after context. It clearly distinguishes itself from sibling analyze_url by explicitly stating 'Not useful for SARIF output — use analyze_url directly for before/after SARIF comparisons,' establishing a clear boundary between the tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal guidance ('Use after fixing accessibility issues to verify improvements'), input requirements ('Both inputs must be JSON strings from analyze_url'), and a clear alternative for excluded use cases ('use analyze_url directly'). This covers when-to-use, when-not-to-use, and prerequisites comprehensively.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

