a11y-scorer

Server Details

Cloudflare Workers MCP server: a11y-scorer

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade B

Average 3.2/5 across 4 of 4 tools scored.

Server Coherence: Grade A
Disambiguation: 5/5

Each tool targets a specific, well-defined accessibility check: raw HTML audit, URL audit, color contrast, and heading structure. No overlap in purpose.

Naming Consistency: 4/5

All tools use the 'a11y_' prefix, but the naming pattern varies between verb_noun (audit_html, audit_url) and noun-only (contrast, headings). Minor inconsistency, but clear overall.

Tool Count: 5/5

With 4 tools, the server is tightly scoped for basic accessibility auditing. Each tool earns its place without being excessive or insufficient.

Completeness: 4/5

Covers common accessibility checks (audit, contrast, headings) but lacks tools for alt text, keyboard accessibility, or ARIA validation. Minor gaps for a comprehensive audit.

Available Tools (4 tools)
a11y_audit_html: Grade B

Audit raw HTML for WCAG 2.1 AA issues.

Parameters (JSON Schema)
html (required; no description, no default)
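
Since the schema ships no parameter descriptions, a concrete call is the quickest way to see the shape. Below is a hypothetical MCP `tools/call` request for this tool, sketched in Python: only the `method`/`params` layout follows the MCP JSON-RPC convention; the HTML snippet and request id are invented for illustration.

```python
import json

# Hypothetical JSON-RPC request to invoke a11y_audit_html over MCP.
# The HTML fragment deliberately contains likely findings
# (an img without alt text, a skipped heading level).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "a11y_audit_html",
        "arguments": {
            # The tool's single required parameter: a raw HTML string.
            "html": "<main><img src='hero.png'><h3>Intro</h3></main>",
        },
    },
}

print(json.dumps(request, indent=2))
```

The response format is undocumented, so a client should treat the returned audit as opaque text or JSON until inspected.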
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only mentions 'audit', which implies a read-only analysis. It does not disclose potential side effects, resource usage, or output behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words or repetition. It is highly concise and front-loaded with the key action and target.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is minimally adequate but lacks details on return format, severity reporting, or any constraints. An agent might need more context for correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no descriptions (0% coverage). The description adds 'raw HTML', clarifying the format of the single string parameter, but does not specify length limits, encoding, or required structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'audit', the resource 'raw HTML', and the specific standard 'WCAG 2.1 AA'. It distinguishes from sibling tools like a11y_audit_url (which audits a URL) and a11y_contrast (contrast checking).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it specify prerequisites or when not to use it. It only states what it does without context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

a11y_audit_url: Grade A

Fetch a URL and return a WCAG 2.1 AA heuristic audit.

Parameters (JSON Schema)
url (required; no description, no default)
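
Because the description states no constraints on `url`, a cautious client might pre-validate it before spending a call. A minimal sketch, assuming (unconfirmed by the schema) that the tool expects an absolute http(s) URL:

```python
from urllib.parse import urlparse

def looks_fetchable(url: str) -> bool:
    """Client-side sanity check before passing `url` to a11y_audit_url.

    The tool declares no constraints, so this guard is an assumption:
    require an absolute URL with an http(s) scheme and a host.
    """
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(looks_fetchable("https://example.com/pricing"))  # True
print(looks_fetchable("example.com/pricing"))          # False: no scheme
```

Whether the server follows redirects, times out, or rejects non-HTML responses remains undocumented.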
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only says 'Fetch' and 'audit' without indicating whether it is read-only, any side effects, timeouts, or prerequisites. Critical information like non-destructive nature or network behavior is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single short sentence conveys the core purpose without any unnecessary words. It is front-loaded with the verb 'Fetch' and the resource, making it efficient for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should explain what the audit returns (format, structure, or error handling). It only states 'return a heuristic audit' without describing the output, leaving the agent with incomplete expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, but the description clarifies the sole parameter 'url' as the URL to fetch. This adds essential meaning beyond the schema's bare type declaration.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fetches a URL and returns a WCAG 2.1 AA heuristic audit, naming both the specific resource (URL) and action (audit). It implicitly distinguishes from siblings like a11y_audit_html (HTML input) and a11y_contrast (contrast checking).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings. The context implies it is for auditing live URLs, but there is no mention of alternatives or exclusions like when to use a11y_audit_html instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

a11y_contrast: Grade C

Evaluate WCAG color contrast for a foreground/background pair.

Parameters (JSON Schema)
bg (required)
fg (required)
bold (optional)
fontSize (optional)
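
The description names the WCAG standard but not the math. For context, this is the standard WCAG 2.1 contrast-ratio calculation such a tool presumably implements; the hex-string input format and the `passes_aa` helper are assumptions for illustration, not the tool's documented API.

```python
def _luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color, per WCAG 2.1."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))
    def chan(c: float) -> float:
        # Linearize each sRGB channel.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * chan(r) + 0.7152 * chan(g) + 0.0722 * chan(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05), in the range 1..21."""
    lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg: str, bg: str, font_size_pt: float = 12, bold: bool = False) -> bool:
    # WCAG "large text": >= 18pt, or >= 14pt if bold.
    # Large text needs 3:1; normal text needs 4.5:1.
    large = font_size_pt >= 18 or (bold and font_size_pt >= 14)
    return contrast_ratio(fg, bg) >= (3.0 if large else 4.5)

print(round(contrast_ratio("#000000", "#ffffff"), 2))  # 21.0
```

This also shows why the undocumented `bold` and `fontSize` parameters matter: they select which AA threshold applies.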
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as mutability, output format, or required permissions. It only states the evaluation, leaving the agent uncertain about side effects or return value structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, straightforward sentence with no extraneous information. Front-loaded with the action verb and key resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description does not explain return values, parameter details, or edge cases. It is insufficient for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%. The description mentions 'foreground/background pair', which covers 'fg' and 'bg', but does not explain 'bold' and 'fontSize' parameters. Two of four parameters are left undefined.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action (evaluate), the resource (WCAG color contrast), and the input (foreground/background pair). It distinguishes from sibling tools like a11y_audit_html and a11y_audit_url, which focus on different accessibility audits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, and no usage context, exclusions, or prerequisites. It simply states what the tool does without any decision-making support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

a11y_headings: Grade B

Extract heading outline from HTML.

Parameters (JSON Schema)
html (required; no description, no default)
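
To illustrate what a "heading outline" typically means, here is a rough client-side equivalent using Python's `html.parser`. The `(level, text)` output shape is an assumption for illustration; the tool itself documents no output format.

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Collect (level, text) pairs for h1..h6, in document order."""

    def __init__(self) -> None:
        super().__init__()
        self.outline: list[tuple[int, str]] = []
        self._level: int | None = None  # heading currently open, if any
        self._buf: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self._level = int(tag[1])
            self._buf = []

    def handle_data(self, data):
        if self._level is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if self._level is not None and tag == f"h{self._level}":
            self.outline.append((self._level, "".join(self._buf).strip()))
            self._level = None

parser = HeadingOutline()
# An h1 followed directly by an h3: the skipped h2 is exactly the kind
# of structural issue an outline makes visible.
parser.feed("<h1>Report</h1><h3>Details</h3>")
print(parser.outline)  # [(1, 'Report'), (3, 'Details')]
```

An outline like this supports structural checks (skipped levels, multiple h1s) without running a full audit.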
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavior. It only states the basic action without disclosing what the outline contains (e.g., heading levels, text), whether it handles malformed HTML, or any security considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that immediately conveys the tool's purpose. It is front-loaded and contains no unnecessary words, though it could benefit from slightly more structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one parameter, no output schema), the description should at least indicate what the returned outline looks like. It does not mention heading levels, text content, or ordering, leaving the agent without enough context to fully understand the tool's output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter 'html' with no description, and schema description coverage is 0%. The description does not explain expected format (e.g., full document vs fragment), encoding, or constraints, failing to add meaningful context beyond the parameter name.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'Extract' and the resource 'heading outline from HTML'. It is specific and distinguishes from sibling tools like a11y_audit_html or a11y_contrast, which focus on full audits or contrast analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. However, the description implies its purpose—extracting headings—which helps infer that it is for structural analysis, not full accessibility audits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
