
Server Quality Checklist

Profile completion: 100%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool has a clearly distinct purpose: analyze_url (single page), analyze_pages (site-level aggregation), trace_path (specific navigation debugging), diff_results (comparative analysis), suggest_remediations (prioritized fixes), save_auth (authentication setup), and list_profiles (configuration discovery). No overlaps in functionality despite some tools working with the same data formats.

    Naming Consistency: 5/5

    All 7 tools follow a consistent snake_case verb_noun pattern (analyze_url, analyze_pages, trace_path, diff_results, suggest_remediations, save_auth, list_profiles). The naming convention is predictable throughout with no mixing of styles or vague verbs.

    Tool Count: 5/5

    Seven tools is an ideal scope for this domain—covering authentication setup, profile configuration, single/multi-page analysis, and three distinct post-processing utilities (diff, remediations, path tracing) without bloat. Each tool earns its place in the accessibility auditing workflow.

    Completeness: 4/5

    Covers the full analysis lifecycle well: config discovery, auth handling, various analysis modes, and result introspection tools. Minor gaps exist—no delete_auth or list_auth for session management, and no get_profile for individual profile details—but core accessibility auditing workflows are fully supported.

  • Average 4.6/5 across 7 of 7 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.1.0

  • Tools from this server were used 10 times in the last 30 days.

  • This repository includes a glama.json configuration file.

  • This server provides 7 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

Tool Scores

analyze_pages

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and discloses key behavioral traits: read-only safety ('does not modify pages'), partial failure handling ('If a single URL fails...remaining URLs still complete'), session behavior ('single browser session'), and output characteristics ('~200 bytes per page').

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two tightly constructed paragraphs with zero waste. First paragraph establishes purpose, mechanism, and safety; second provides usage guidance, output expectations, and error handling. Every sentence conveys distinct information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the description effectively compensates for the missing output schema by detailing return size and structure, it fails to specify the domain of analysis (accessibility testing implied by 'AT profile' and 'sr' in default values, but never stated), leaving a significant contextual gap for an otherwise complex 6-parameter tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage (all 6 parameters documented), establishing a baseline of 3. The main description text does not add parameter-specific semantics (e.g., explaining 'AT profile ID' or providing URL format examples), but none is required given complete schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states the tool 'Analyze[s] multiple pages and produce[s] an aggregated site-level report' and distinguishes itself from the sibling analyze_url by noting it combines results into a 'site score with per-page breakdown' and runs 'in a single browser session.'

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit guidance: 'Use this instead of calling analyze_url repeatedly when you need a site-level assessment.' This clearly defines when to prefer this tool over its sibling alternative.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_auth

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden effectively. It explicitly warns about file overwrites ('Overwrites the output file'), documents the headed browser launch, and details the interaction side effects (clicks, fills). Minor gap: doesn't describe failure modes if authentication steps fail.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Lengthy but appropriately so given the complexity of the 'steps' DSL. Well-structured with bold headers ('Side effects', 'Steps format') that allow scanning. The main purpose is front-loaded in the first sentence, and every section serves a distinct function (behavioral warning, integration guide, syntax specification).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for a tool lacking both annotations and output schema. Explains the output file's purpose (cookies/localStorage storage), format (JSON), and consumption pattern (pass to sibling tools). Could improve by describing error conditions or the structure of the saved storageState file.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage, the description adds critical value for the complex 'steps' parameter, which lacks structural constraints in the schema (just 'array of objects'). It defines the specific action types (click, fill, wait, waitForUrl), their formats, and provides concrete JSON examples—essential information not present in the structured schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
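To make the 'steps' DSL discussion above concrete: a steps array of this shape would match the action types the description names (click, fill, wait, waitForUrl). This is a purely hypothetical sketch — the field names and structure are assumptions for illustration, not the tool's actual schema:

```
[
  { "action": "fill", "selector": "#email", "value": "user@example.com" },
  { "action": "fill", "selector": "#password", "value": "secret" },
  { "action": "click", "selector": "button[type=submit]" },
  { "action": "waitForUrl", "url": "**/dashboard" }
]
```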

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with specific verbs ('Authenticate... and save') and clearly identifies the resource (web application session). It explicitly distinguishes this tool from siblings by stating it's 'Not needed for public pages' and listing the specific sibling tools (analyze_url, trace_path, analyze_pages) that consume its output.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use guidance ('only use when content is behind authentication') and clear integration instructions ('Pass the output file path as `storageState` to analyze_url, trace_path, or analyze_pages'). The side effects section further clarifies behavioral boundaries.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

diff_results

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses safety properties ('Read-only, no side effects'), compensates for the missing output schema by documenting the return structure ('Returns a JSON array of {targetId...}'), and explains the comparison logic ('Shows what improved, regressed...'). Does not mention error handling for malformed inputs, preventing a perfect score.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Seven sentences deliver high information density with zero redundancy. The structure is front-loaded (purpose first), followed by behavioral details, return format, usage timing, input constraints, and sibling differentiation. Every sentence earns its place with no filler text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the moderate complexity (2 string parameters, comparison logic) and absence of both annotations and output schema, the description achieves completeness by covering safety profile, return value structure, input provenance, and sibling relationships. Nothing critical is missing for an agent to invoke this tool correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the input schema has 100% coverage ('Baseline analysis result as JSON string'), the description adds crucial provenance context that the schema lacks: specifying that inputs must come from 'analyze_url (format=json)'. This source constraint is vital for correct usage and elevates the score above the baseline of 3.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Compare') and resource ('Tactual analysis results'), including the before/after context. It clearly distinguishes itself from sibling analyze_url by explicitly stating 'Not useful for SARIF output — use analyze_url directly for before/after SARIF comparisons,' establishing a clear boundary between the tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit temporal guidance ('Use after fixing accessibility issues to verify improvements'), input requirements ('Both inputs must be JSON strings from analyze_url'), and a clear alternative for excluded use cases ('use analyze_url directly'). This covers when-to-use, when-not-to-use, and prerequisites comprehensively.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_profiles

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Without annotations, the description carries the full burden and effectively discloses: read-only nature, idempotency ('static data'), parameter state ('no parameters'), and return structure ('array of {id, name, platform, description}'). It could improve by mentioning cache behavior or error conditions, but covers the critical safety and data properties.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with purpose first, followed by domain explanation, return format, behavioral properties, and usage workflow. Every sentence adds value—examples clarify AT profiles, the return structure documents the output schema absence, and the default profile note prevents redundant calls. No redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given zero parameters, no annotations, and no output schema, the description compensates comprehensively: it explains the domain (screen readers, platforms), data structure (navigation weights, vocabulary), return format (specific fields), and integration pattern (sibling tool usage). Complete for this complexity level.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has zero parameters (baseline 4). The description confirms 'no parameters' which aligns with the empty input schema, providing explicit confirmation that no arguments are expected or accepted.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('List') and resource ('assistive-technology (AT) profiles'), clarifies the domain ('available for scoring'), and distinguishes from siblings by contrasting discovery (this tool) versus analysis (analyze_url, trace_path, analyze_pages).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly describes the workflow ('Call once to discover valid profile IDs, then pass a profile ID to...') and names three specific sibling tools to use with the results. Also notes the default profile ('generic-mobile-web-sr-v0') for when this tool isn't used, providing clear when-to-use guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest_remediations

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, but description effectively discloses: 'Read-only, no side effects' for safety, return format ('JSON array of {targetId...}'), uniqueness constraint ('top unique'), and ranking logic ('ranked by severity'). Lacks error handling details but compensates well for missing annotations/output schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Five sentences, zero waste. Front-loaded with purpose and return value, followed by safety, usage context, exclusion case, and input constraint. Every sentence provides unique information not redundant with schema or annotations.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for a 2-parameter extraction tool. Compensates for lack of annotations by declaring read-only safety, and for lack of output schema by documenting return structure. Covers purpose, constraints, sibling relationships (analyze_url), and output format comprehensively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% coverage (baseline 3). Description adds crucial provenance context that the 'analysis' parameter 'must be a JSON string from analyze_url (format='json')', which is essential for correct invocation. Could add detail about maxSuggestions behavior (e.g., if fewer exist than requested).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb 'Extract' with resource 'remediation suggestions from a Tactual analysis result' and operation 'ranked by severity'. Clearly distinguishes from SARIF workflows by stating redundancy, and explicitly links to sibling analyze_url as the required input source.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use ('Most useful with large JSON results where you want a prioritized shortlist'), when not to use ('For SARIF results... this tool is redundant'), and prerequisites ('Input must be a JSON string from analyze_url'). Names specific alternative handling (SARIF inline suggestions).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trace_path

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, but description explicitly states 'Read-only — navigates to the URL but does not modify the page' and notes that statesJson 'skips browser launch entirely'. Could improve by mentioning timeout/failure behavior, but covers primary side effects well.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Perfectly structured with front-loaded purpose ('Trace the exact...'), immediate return value disclosure, read-only warning, and a bolded section for the complex auth workflow. Every sentence serves a distinct purpose; zero redundancy despite 9 parameters.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a complex 9-parameter tool with no output schema, description comprehensively covers return structure ('step-by-step actions... modeled announcements, cumulative cost'), auth scenarios, SPA handling (waitForSelector), and sibling tool relationships. Complete without being verbose.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds valuable workflow semantics explaining how target relates to analyze_url results (exact ID vs glob) and the specific statesJson workflow chain (analyze_url → extract → trace_path), which adds meaning beyond isolated parameter definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Trace' with resource 'screen-reader navigation path' and clearly distinguishes from sibling analyze_url by stating it explains 'why a target scored poorly' and should be used 'after analyze_url'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use ('after analyze_url to understand why a target scored poorly') and provides detailed workflow for auth-gated targets using statesJson from analyze_url with includeStates=true. Clearly positions tool in the analysis workflow chain.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

analyze_url

  • Behavior: 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, but description carries full burden excellently: discloses 'sandboxed browser', 'does not modify the page', performance characteristics ('~30-60s', '~4KB', '~500 bytes'), filtering behavior ('auto-filters to findings that need attention'), and hydration waiting for SPAs. Rich behavioral context beyond safety hints.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three paragraphs with zero waste: first states purpose+safety, second gives format recommendation with size rationale, third gives SPA-specific guidance. Markdown bolding highlights actionable sections. Every sentence provides unique value not present in structured fields.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a complex 17-parameter tool with no output schema, the description effectively covers critical usage patterns (SPA hydration, authentication workflow, output format selection, probe vs no-probe). Deduct one point because it doesn't explicitly define 'AT' (assistive technology) and leaves some advanced parameters (focus, excludeSelector) to schema alone, though schema coverage is complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage (baseline 3), but description adds significant semantic value: explains format selection trade-offs, waitForSelector hydration purpose, probe timing cost, includeStates integration with trace_path, and storageState workflow with save_auth. Elevates beyond schema with usage intent and integration context.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Analyze' with clear resource 'web page' and scope 'screen-reader navigation cost'. The singular 'web page' implicitly distinguishes from sibling 'analyze_pages' (batch), and the probe parameter description explicitly notes 'analyze_pages never probes', reinforcing the distinction.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit recommendations: format='sarif' for concise output vs JSON/markdown '100x larger'; waitForSelector for SPAs; probe for 'deep investigation, not for triage'; storageState workflow referencing 'save_auth' sibling; summaryOnly for 'quick page health checks'. Clear when-to-use guidance for each major parameter.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.


How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, a weighted average of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
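The weighting described above can be sketched as a small calculation. This is an illustration of the stated formula, not Glama's actual implementation; the example inputs (per-tool TDQS values and a coherence score) are made up for demonstration:

```python
def tdqs(purpose, usage, behavior, params, concise, complete):
    """Weighted 1-5 Tool Definition Quality Score for a single tool."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * behavior
            + 0.15 * params + 0.10 * concise + 0.10 * complete)


def definition_quality(tool_scores):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    return 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)


def overall(tool_scores, coherence):
    """Blend 70% definition quality with 30% server coherence."""
    return 0.7 * definition_quality(tool_scores) + 0.3 * coherence


def tier(score):
    """Map an overall score to a letter tier; B and above is passing."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, two tools scoring 4.5 and 4.7 with a coherence of 4.75 yield a definition quality of 0.6 × 4.6 + 0.4 × 4.5 = 4.56, an overall score of 0.7 × 4.56 + 0.3 × 4.75 ≈ 4.62, and tier A.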


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tactual-dev/tactual'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.