Axcess — Design Accessibility Evaluation
Server Details
Evaluates UI designs for WCAG accessibility issues automated scanners miss. Paid via x402 on Base.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.6/5 across 3 of 3 tools scored.
The three tools have clearly distinct purposes with no overlap. evaluate_accessibility focuses on UI element accessibility, evaluate_typography handles typography-specific evaluation, and list_capabilities provides metadata about available tools. An agent can easily distinguish between them based on their specialized domains.
All tool names follow a consistent verb_noun pattern with snake_case formatting: evaluate_accessibility, evaluate_typography, and list_capabilities. This predictable naming convention makes the tool set easy to navigate and understand at a glance.
With only three tools, the set feels somewhat thin for a server focused on 'Design Accessibility Evaluation.' While the two evaluation tools cover distinct aspects, the domain suggests potential gaps in areas like color contrast evaluation or mobile-specific accessibility checks that aren't addressed.
The server covers two specific evaluation domains (general UI accessibility and typography) with dedicated tools, but lacks broader coverage expected for design accessibility. Missing are tools for color contrast evaluation, mobile/touch-specific checks beyond touch targets, and integration with design tools beyond the mentioned Figma example. The surface is functional but incomplete for comprehensive design evaluation.
Available Tools
3 tools

evaluate_accessibility: Evaluate UI Accessibility (read-only, idempotent)
Evaluates UI elements for accessibility issues that automated scanners miss.
COST: $0.01 USDC via x402 on Base-compatible EVM network per call.
Checks beyond what axe/Lighthouse/WAVE catch at the design stage:
- Touch targets below 24×24px (WCAG 2.5.8 AA hard fail)
- Touch targets below 44×44px (WCAG 2.5.5 AAA recommended)
- Information conveyed by color alone without a secondary indicator (WCAG 1.4.1)
- Missing focus indicators on interactive elements (WCAG 2.4.7)
- Focus rings thinner than 2px (WCAG 2.4.11)
- Focus ring contrast below 3:1 against adjacent background (WCAG 2.4.11)
- Interactive elements below the practical usability height floor
Args:
- elements: Array of 1–50 UI element objects
- screen_name: Optional label for the evaluation report

Each element requires element_type. Provide width_px/height_px for touch-target checks; uses_color_only plus secondary-indicator flags for 1.4.1 checks; and is_interactive, focus_visible, and focus-indicator properties for focus checks.
Returns: Structured report with:
- Per-element scores (0–100) and specific issues
- Severity levels (critical/major/minor) with WCAG references
- What automated tools miss and why
- Concrete fix recommendations
- Overall score and verdict (pass/needs_work/fail)
- Top issues sorted by severity
| Name | Required | Description | Default |
|---|---|---|---|
| elements | Yes | Array of UI elements to evaluate | |
| screen_name | No | Name of the screen or component being evaluated | |
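Because each call costs $0.01, an agent may want to pre-screen payloads locally. A minimal sketch that mirrors only the WCAG 2.5.8/2.5.5 touch-target thresholds above, using the element field names from the tool description (element_type, width_px, height_px); the server's actual scoring is richer than this:

```python
# Local pre-flight check of an evaluate_accessibility payload.
# Field names follow the tool description; this reproduces only the
# touch-target size thresholds, not the server's full rubric.

def touch_target_issues(element: dict) -> list[str]:
    """Return WCAG touch-target findings for one element dict."""
    w = element.get("width_px")
    h = element.get("height_px")
    if w is None or h is None:
        # Without sizes, the tool skips touch-target checks entirely.
        return []
    if w < 24 or h < 24:
        return ["WCAG 2.5.8 (AA) hard fail: target below 24x24px"]
    if w < 44 or h < 44:
        return ["WCAG 2.5.5 (AAA) advisory: target below 44x44px"]
    return []

payload = {
    "screen_name": "Checkout",
    "elements": [
        {"element_type": "button", "width_px": 20, "height_px": 20},
        {"element_type": "icon_button", "width_px": 32, "height_px": 32},
        {"element_type": "link", "width_px": 48, "height_px": 48},
    ],
}

for el in payload["elements"]:
    print(el["element_type"], touch_target_issues(el))
```

Elements that already fail the local size check can be fixed before the paid call; the server still adds color-alone and focus-indicator findings this sketch does not cover.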
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond annotations, including cost details ($0.01 USDC per call), specific WCAG checks performed, and output structure (e.g., per-element scores, severity levels). Annotations cover read-only, non-destructive, and idempotent hints, but the description enriches this with practical usage insights without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by cost, specific checks, parameter guidance, and return details. It is appropriately sized; every sentence adds value, though minor trimming is possible by integrating some details more tightly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of accessibility evaluation, no output schema, and rich annotations, the description is largely complete. It covers purpose, cost, checks, parameter semantics, and return structure. However, it could briefly mention error handling or limitations to achieve full completeness for such a detailed tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by explaining parameter requirements in plain language (e.g., 'Each element requires: element_type' and details on what to provide for specific checks), clarifying the semantics beyond the schema's technical definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('evaluates UI elements for accessibility issues') and distinguishes it from automated scanners like axe/Lighthouse/WAVE. It explicitly mentions what it checks beyond those tools, making its scope distinct from potential siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating it checks 'accessibility issues that automated scanners miss' and lists specific WCAG criteria, which implies when to use this tool for deeper evaluation. However, it does not explicitly mention when not to use it or name alternatives among siblings, though the context suggests it complements rather than replaces automated tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
evaluate_typography: Evaluate Typography Accessibility (read-only, idempotent)
Evaluates typography elements against a principled accessibility rubric.
COST: $0.05 USDC via x402 on Base-compatible EVM network per call.
Goes beyond what axe/Lighthouse/WAVE can check: it evaluates design judgment, not just numeric compliance. Catches issues like:
- Contrast that passes WCAG 4.5:1 but fails visually due to thin font weight
- Body text that meets minimum size requirements but is still too small for comfortable reading
- Line heights that technically comply but impede readability for dyslexic users
- Extended all-caps or italic text that passes all AA criteria but impairs reading
- Text on gradient/image backgrounds where scanner sampling is unreliable
- Heading sizes that are technically correct but visually indistinct from body text
Args:
- elements: Array of 1–50 typography element objects with font/color properties
- screen_name: Optional label for the evaluation report

Each element requires: element_type, font_size, font_weight, line_height, color_hex, background_color_hex.
Returns: Structured report with:
- Per-element scores (0–100)
- Specific issues with severity (critical/major/minor)
- WCAG references and what automated tools miss
- Concrete fix recommendations
- Overall score and verdict (pass/needs_work/fail)
- Top issues sorted by severity
Example use: Extract text layer properties from Figma using get_design_context, pass the typography properties to this tool for evaluation before shipping.
| Name | Required | Description | Default |
|---|---|---|---|
| elements | Yes | Array of typography elements to evaluate | |
| screen_name | No | Name of the screen or component being evaluated | |
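The tool's value is catching contrast that passes numerically but fails visually; the numeric part itself can be computed locally from the required color_hex/background_color_hex fields. A sketch of the standard WCAG relative-luminance contrast formula (the server's judgment layer, e.g. font-weight effects, goes beyond this):

```python
# WCAG 2.x contrast ratio from the color_hex / background_color_hex
# fields required by evaluate_typography. This reproduces only the
# numeric check; the paid tool layers design judgment on top of it.

def _channel(c: int) -> float:
    """sRGB channel (0-255) to linear light, per the WCAG definition."""
    s = c / 255.0
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def contrast_ratio(fg_hex: str, bg_hex: str) -> float:
    def luminance(hex_str: str) -> float:
        h = hex_str.lstrip("#")
        r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
        return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

    lighter, darker = sorted((luminance(fg_hex), luminance(bg_hex)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 2))  # 21.0
print(round(contrast_ratio("#777777", "#ffffff"), 2))  # just under the 4.5:1 AA line
```

A mid-grey like #777777 on white lands just below 4.5:1, the kind of near-threshold case where the tool's judgment about font weight and size matters most.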
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it discloses the cost ('$0.05 USDC via x402 on Base-compatible EVM network per call'), explains the tool's unique approach ('evaluates design judgment, not just numeric compliance'), and provides concrete examples of what it catches. Annotations cover read-only/idempotent/non-destructive behavior, so the description appropriately focuses on operational context rather than repeating safety information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: it starts with the core purpose, immediately states cost, then explains the unique value proposition with bulleted examples, details parameters and returns, and ends with a concrete usage example. Every sentence serves a clear purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (accessibility evaluation with nuanced design judgment), the description provides excellent context: it explains the tool's unique approach, lists specific issue types, details required parameters, describes the return structure, and gives a practical usage example. The annotations cover safety aspects, and while there's no output schema, the description thoroughly documents the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds meaningful context: it specifies the 'elements' array requires 'element_type, font_size, font_weight, line_height, color_hex, background_color_hex' and gives an example workflow ('Extract text layer properties from Figma'). It also clarifies the purpose of 'screen_name' as 'Optional label for the evaluation report,' adding value beyond the schema's technical description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'evaluates typography elements against a principled accessibility rubric' with specific examples of what it catches (contrast issues, font weight problems, etc.). It distinguishes from sibling tools by specifying it goes 'beyond what axe/Lighthouse/WAVE can check' and focuses on 'design judgment, not just numeric compliance.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Extract text layer properties from Figma using get_design_context, pass the typography properties to this tool for evaluation before shipping.' It also distinguishes from alternatives by stating it goes beyond automated tools like axe/Lighthouse/WAVE, and the sibling tool list suggests clear alternatives (evaluate_accessibility, list_capabilities).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_capabilities: List Axcess Capabilities & Pricing (read-only, idempotent)
Returns available evaluation tools, what they check, and their pricing. Call this first to understand what Axcess can evaluate and how much each evaluation costs.
This tool is FREE. All evaluation tools require USDC payment on Base network.
Returns: JSON with tool descriptions, pricing, and rubric categories.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
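An agent calling list_capabilities first can budget its paid calls before spending anything. A sketch of that pattern; the exact JSON shape is not documented here, so the "tools"/"price_usdc" field names are an assumption, while the prices themselves come from the tool descriptions on this page:

```python
import json

# Hypothetical list_capabilities response; field names are assumed,
# prices ($0.01 and $0.05 USDC) are from the tool descriptions above.
response = json.loads("""
{
  "tools": [
    {"name": "evaluate_accessibility", "price_usdc": 0.01},
    {"name": "evaluate_typography",    "price_usdc": 0.05},
    {"name": "list_capabilities",      "price_usdc": 0.0}
  ]
}
""")

# Work in integer cents to avoid float floor-division surprises.
budget_cents = 25  # 0.25 USDC the agent is willing to spend
paid = {t["name"]: round(t["price_usdc"] * 100)
        for t in response["tools"] if t["price_usdc"] > 0}
affordable_calls = {name: budget_cents // cents for name, cents in paid.items()}
print(affordable_calls)  # {'evaluate_accessibility': 25, 'evaluate_typography': 5}
```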
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations already indicate this is a safe, read-only, idempotent operation, the description discloses that this tool is FREE (important cost context) and that all evaluation tools require USDC payment on Base network (critical system behavior). It also explains the return format (JSON with tool descriptions, pricing, and rubric categories).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise with three sentences that each earn their place: first states the core functionality, second provides critical usage guidance and cost context, third specifies the return format. No wasted words, front-loaded with the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple discovery tool with 0 parameters, comprehensive annotations, and no output schema, the description provides complete context. It explains what the tool does, when to use it, cost implications, and return format - covering all necessary aspects for an agent to understand and invoke this tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since there are none, and instead focuses on what the tool returns and its purpose. No parameter information is needed or expected.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Returns available evaluation tools, what they check, and their pricing') and distinguishes it from siblings by explaining it should be called first to understand what Axcess can evaluate. It explicitly names the resource (evaluation tools) and scope (capabilities & pricing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this first to understand what Axcess can evaluate') and distinguishes it from alternatives by positioning it as an initial discovery tool before using evaluation tools like evaluate_accessibility and evaluate_typography. It also specifies that all evaluation tools require payment while this one is free.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
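Before publishing the claim file, it is worth confirming it parses as JSON and carries the required fields. A minimal local check (a sketch only; Glama's actual verification additionally matches the email against your account):

```python
import json

# Sanity-check a glama.json claim file before serving it at
# /.well-known/glama.json. Replace the placeholder email with the
# address on your Glama account.
claim = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

doc = json.loads(claim)
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
emails = [m["email"] for m in doc["maintainers"]]
assert emails and all("@" in e for e in emails)
print("claim file parses; maintainer emails:", emails)
```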
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.