a11y-scorer
Server Details
Cloudflare Workers MCP server: a11y-scorer
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 4 of 4 tools scored.
Each tool targets a specific, well-defined accessibility check: raw HTML audit, URL audit, color contrast, and heading structure. No overlap in purpose.
All tools use the 'a11y_' prefix, but the naming pattern varies between verb_noun (audit_html, audit_url) and noun-only (contrast, headings). Minor inconsistency, but clear overall.
With 4 tools, the server is tightly scoped for basic accessibility auditing. Each tool earns its place without being excessive or insufficient.
Covers common accessibility checks (audit, contrast, headings) but lacks tools for alt text, keyboard accessibility, or ARIA validation. Minor gaps for a comprehensive audit.
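Taken together, the four tools are discoverable through a standard MCP `tools/list` call. Below is a minimal TypeScript sketch using the official MCP SDK over the advertised Streamable HTTP transport; the endpoint URL is a placeholder, since the listing above does not publish one.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the real server URL is not shown in this listing.
const client = new Client({ name: "a11y-demo", version: "0.0.1" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://a11y-scorer.example.workers.dev/mcp"))
);

// Should report: a11y_audit_html, a11y_audit_url, a11y_contrast, a11y_headings.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```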
Available Tools
4 tools

a11y_audit_html (Grade: B)
Audit raw HTML for WCAG 2.1 AA issues.
| Name | Required | Description | Default |
|---|---|---|---|
| html | Yes | | |
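As a sketch of first use, the call below exercises the single required `html` parameter; the endpoint URL is a placeholder and, because the tool publishes no output schema, the shape of the returned content is an open question until inspected.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the discovery sketch above.
const client = new Client({ name: "a11y-demo", version: "0.0.1" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://a11y-scorer.example.workers.dev/mcp"))
);

// The fragment omits alt text and skips a heading level, giving a
// WCAG 2.1 AA audit something concrete to flag. The response format
// is undocumented, so dump it verbatim.
const result = await client.callTool({
  name: "a11y_audit_html",
  arguments: { html: '<main><img src="hero.png"><h3>Intro</h3></main>' },
});
console.log(JSON.stringify(result.content, null, 2));
```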
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full disclosure burden, but it only mentions 'audit', which implies a read-only analysis. It does not disclose potential side effects, resource usage, or output behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no unnecessary words or repetition. It is highly concise and front-loaded with the key action and target.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description is minimally adequate but lacks details on return format, severity reporting, or any constraints. An agent might need more context for correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no descriptions (0% coverage). The description adds 'raw HTML', clarifying the format of the single string parameter, but does not specify length limits, encoding, or required structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'audit', the resource 'raw HTML', and the specific standard 'WCAG 2.1 AA'. It distinguishes from sibling tools like a11y_audit_url (which audits a URL) and a11y_contrast (contrast checking).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it specify prerequisites or when not to use it. It only states what it does without context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
a11y_audit_url (Grade: A)
Fetch a URL and return a WCAG 2.1 AA heuristic audit.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
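A hedged usage sketch, assuming the same placeholder endpoint: the server fetches the page itself, so the only argument an agent supplies is the target `url`.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the earlier sketches.
const client = new Client({ name: "a11y-demo", version: "0.0.1" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://a11y-scorer.example.workers.dev/mcp"))
);

// The server performs the network fetch; expect latency and possible
// fetch errors for unreachable pages. The audit's return format is
// undocumented, so inspect it at runtime.
const result = await client.callTool({
  name: "a11y_audit_url",
  arguments: { url: "https://example.com/" },
});
console.log(JSON.stringify(result.content, null, 2));
```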
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only says 'Fetch' and 'audit' without indicating whether the tool is read-only or noting side effects, timeouts, or prerequisites. Critical information, such as its non-destructive nature and network behavior, is missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single 12-word sentence conveys the core purpose without any unnecessary words. It is front-loaded with the verb 'Fetch' and resource, making it efficient for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should explain what the audit returns (format, structure, or error handling). It only states 'return a heuristic audit' without describing the output, leaving the agent with incomplete expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, but the description clarifies the sole parameter 'url' as the URL to fetch. This adds essential meaning beyond the schema's bare type declaration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches a URL and returns a WCAG 2.1 AA heuristic audit, naming both the specific resource (URL) and action (audit). It implicitly distinguishes from siblings like a11y_audit_html (HTML input) and a11y_contrast (contrast checking).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus siblings. The context implies it is for auditing live URLs, but there is no mention of alternatives or exclusions like when to use a11y_audit_html instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
a11y_contrast (Grade: C)
Evaluate WCAG color contrast for a foreground/background pair.
| Name | Required | Description | Default |
|---|---|---|---|
| bg | Yes | | |
| fg | Yes | | |
| bold | No | | |
| fontSize | No | | |
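For context, WCAG 2.1 AA requires a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text (roughly 18pt, or 14pt bold). The sketch below assumes `fg`/`bg` accept hex strings and that `fontSize` (in points) and `bold` select which threshold applies; none of this is documented in the schema, so treat the argument formats as guesses.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the earlier sketches.
const client = new Client({ name: "a11y-demo", version: "0.0.1" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://a11y-scorer.example.workers.dev/mcp"))
);

// #767676 on white is roughly 4.5:1, right at the AA normal-text line.
// Hex strings, points for fontSize, and a boolean for bold are all
// assumptions; the schema declares bare types with no descriptions.
const result = await client.callTool({
  name: "a11y_contrast",
  arguments: { fg: "#767676", bg: "#ffffff", fontSize: 14, bold: true },
});
console.log(JSON.stringify(result.content, null, 2));
```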
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits such as mutability, output format, or required permissions. It states only what is evaluated, leaving the agent uncertain about side effects or the structure of the return value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, straightforward sentence with no extraneous information. Front-loaded with the action verb and key resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description does not explain return values, parameter details, or edge cases. It is insufficient for an agent to fully understand the tool's behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%. The description mentions 'foreground/background pair', which covers 'fg' and 'bg', but does not explain 'bold' and 'fontSize' parameters. Two of four parameters are left undefined.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (evaluate), the resource (WCAG color contrast), and the input (foreground/background pair). It distinguishes from sibling tools like a11y_audit_html and a11y_audit_url, which focus on different accessibility audits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, and no usage context, exclusions, or prerequisites. It simply states what the tool does without any decision-making support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
a11y_headings (Grade: B)
Extract heading outline from HTML.
| Name | Required | Description | Default |
|---|---|---|---|
| html | Yes | | |
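A final usage sketch under the same assumptions; the fragment deliberately jumps from h1 to h3 so that a level-skip check, if the outline includes one, has something to report.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint, as in the earlier sketches.
const client = new Client({ name: "a11y-demo", version: "0.0.1" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://a11y-scorer.example.workers.dev/mcp"))
);

// The outline's structure (levels, text, ordering) is undocumented,
// so print whatever comes back.
const result = await client.callTool({
  name: "a11y_headings",
  arguments: { html: "<h1>Title</h1><h3>Skipped to level three</h3>" },
});
console.log(JSON.stringify(result.content, null, 2));
```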
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavior. It only states the basic action without disclosing what the outline contains (e.g., heading levels, text), whether it handles malformed HTML, or any security considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that immediately conveys the tool's purpose. It is front-loaded and contains no unnecessary words, though it could benefit from slightly more structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no output schema), the description should at least indicate what the returned outline looks like. It does not mention heading levels, text content, or ordering, leaving the agent without enough context to fully understand the tool's output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter 'html' with no description, and schema description coverage is 0%. The description does not explain expected format (e.g., full document vs fragment), encoding, or constraints, failing to add meaningful context beyond the parameter name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Extract' and the resource 'heading outline from HTML'. It is specific and distinguishes from sibling tools like a11y_audit_html or a11y_contrast, which focus on full audits or contrast analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. However, the description implies its purpose—extracting headings—which helps infer that it is for structural analysis, not full accessibility audits.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.