
Axcess — Design Accessibility Evaluation

Server Details

Evaluates UI designs for WCAG accessibility issues automated scanners miss. Paid via x402 on Base.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 4.6/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

The three tools have clearly distinct purposes with no overlap. evaluate_accessibility focuses on UI element accessibility, evaluate_typography handles typography-specific evaluation, and list_capabilities provides metadata about available tools. An agent can easily distinguish between them based on their specialized domains.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case formatting: evaluate_accessibility, evaluate_typography, and list_capabilities. This predictable naming convention makes the tool set easy to navigate and understand at a glance.

Tool Count: 3/5

With only three tools, the set feels somewhat thin for a server focused on 'Design Accessibility Evaluation.' While the two evaluation tools cover distinct aspects, the domain suggests potential gaps in areas like color contrast evaluation or mobile-specific accessibility checks that aren't addressed.

Completeness: 3/5

The server covers two specific evaluation domains (general UI accessibility and typography) with dedicated tools, but lacks broader coverage expected for design accessibility. Missing are tools for color contrast evaluation, mobile/touch-specific checks beyond touch targets, and integration with design tools beyond the mentioned Figma example. The surface is functional but incomplete for comprehensive design evaluation.

Available Tools

3 tools
evaluate_accessibility (Evaluate UI Accessibility): A
Read-only · Idempotent

Evaluates UI elements for accessibility issues that automated scanners miss.

COST: $0.01 USDC via x402 on Base-compatible EVM network per call.

Checks beyond what axe/Lighthouse/WAVE catch at the design stage:

  • Touch targets below 24×24px (WCAG 2.5.8 AA hard fail)

  • Touch targets below 44×44px (WCAG 2.5.5 AAA recommended)

  • Information conveyed by color alone without a secondary indicator (WCAG 1.4.1)

  • Missing focus indicators on interactive elements (WCAG 2.4.7)

  • Focus rings thinner than 2px (WCAG 2.4.11)

  • Focus ring contrast below 3:1 against adjacent background (WCAG 2.4.11)

  • Interactive elements below the practical usability height floor

Args:

  • elements: Array of 1–50 UI element objects

  • screen_name: Optional label for the evaluation report

Each element requires: element_type. Provide width_px/height_px for touch target checks. Provide uses_color_only + secondary indicator flags for 1.4.1 checks. Provide is_interactive + focus_visible + focus indicator properties for focus checks.

Returns: Structured report with:

  • Per-element scores (0–100) and specific issues

  • Severity levels (critical/major/minor) with WCAG references

  • What automated tools miss and why

  • Concrete fix recommendations

  • Overall score and verdict (pass/needs_work/fail)

  • Top issues sorted by severity
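The per-element fields described above can be mirrored in a local pre-check before paying for a call. This is a hypothetical sketch: the field names (element_type, width_px, height_px) and the WCAG floors (24×24px AA, 44×44px AAA) come from the description, but the helper, its severity mapping, and the sample elements are ours, not the server's actual scoring logic.

```python
# Hypothetical local pre-check mirroring the touch-target rules the tool
# description lists (WCAG 2.5.8 AA floor of 24x24px, 2.5.5 AAA at 44x44px).
def touch_target_issues(elements):
    """Flag elements whose width_px/height_px fall below the WCAG floors."""
    issues = []
    for el in elements:
        w, h = el.get("width_px"), el.get("height_px")
        if w is None or h is None:
            continue  # sizes are optional; with no size there is no check
        if w < 24 or h < 24:
            # AA hard fail per the description; severity label is our guess
            issues.append({"element": el["element_type"],
                           "severity": "critical", "wcag": "2.5.8"})
        elif w < 44 or h < 44:
            # AAA recommendation only
            issues.append({"element": el["element_type"],
                           "severity": "minor", "wcag": "2.5.5"})
    return issues

elements = [
    {"element_type": "button", "width_px": 20, "height_px": 20},
    {"element_type": "link", "width_px": 48, "height_px": 32},
    {"element_type": "icon"},  # no size provided, so it is skipped
]
print(touch_target_issues(elements))
```

Filtering out obvious failures locally keeps the $0.01-per-call spend for the judgment-based checks the scanner-style rules above cannot express.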

Parameters (JSON Schema):

  • elements (required): Array of UI elements to evaluate

  • screen_name (optional): Name of the screen or component being evaluated
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations, including cost details ($0.01 USDC per call), specific WCAG checks performed, and output structure (e.g., per-element scores, severity levels). Annotations cover read-only, non-destructive, and idempotent hints, but the description enriches this with practical usage insights without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by cost, specific checks, parameter guidance, and return details. It is appropriately sized but could be slightly more concise by integrating some details more tightly; every sentence adds value, though minor trimming is possible.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of accessibility evaluation, no output schema, and rich annotations, the description is largely complete. It covers purpose, cost, checks, parameter semantics, and return structure. However, it could briefly mention error handling or limitations to achieve full completeness for such a detailed tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds value by explaining parameter requirements in plain language (e.g., 'Each element requires: element_type' and details on what to provide for specific checks), clarifying the semantics beyond the schema's technical definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('evaluates UI elements for accessibility issues') and distinguishes it from automated scanners like axe/Lighthouse/WAVE. It explicitly mentions what it checks beyond those tools, making its scope distinct from potential siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by stating it checks 'accessibility issues that automated scanners miss' and lists specific WCAG criteria, which implies when to use this tool for deeper evaluation. However, it does not explicitly mention when not to use it or name alternatives among siblings, though the context suggests it complements rather than replaces automated tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

evaluate_typography (Evaluate Typography Accessibility): A
Read-only · Idempotent

Evaluates typography elements against a principled accessibility rubric.

COST: $0.05 USDC via x402 on Base-compatible EVM network per call.

Goes beyond what axe/Lighthouse/WAVE can check — evaluates design judgment, not just numeric compliance. Catches issues like:

  • Contrast that passes WCAG 4.5:1 but fails visually due to thin font weight

  • Body text that meets minimum size requirements but is still too small for comfortable reading

  • Line heights that technically comply but impede readability for dyslexic users

  • Extended all-caps or italic text that passes all AA criteria but impairs reading

  • Text on gradient/image backgrounds where scanner sampling is unreliable

  • Heading sizes that are technically correct but visually indistinct from body

Args:

  • elements: Array of 1–50 typography element objects with font/color properties

  • screen_name: Optional label for the evaluation report

Each element requires: element_type, font_size, font_weight, line_height, color_hex, background_color_hex.

Returns: Structured report with:

  • Per-element scores (0–100)

  • Specific issues with severity (critical/major/minor)

  • WCAG references and what automated tools miss

  • Concrete fix recommendations

  • Overall score and verdict (pass/needs_work/fail)

  • Top issues sorted by severity

Example use: Extract text layer properties from Figma using get_design_context, then pass the typography properties to this tool for evaluation before shipping.
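The 4.5:1 figure mentioned above is the standard WCAG relative-luminance contrast ratio, which can be computed locally from the color_hex/background_color_hex pair each element carries. The formula is the one defined by WCAG 2.x; the helper names are ours.

```python
# WCAG 2.x contrast ratio between two hex colors, e.g. the
# color_hex / background_color_hex pair on each typography element.
def _luminance(hex_color):
    """Relative luminance: sRGB channels linearized, then weighted."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    lin = lambda c: c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(fg_hex, bg_hex):
    l1, l2 = sorted((_luminance(fg_hex), _luminance(bg_hex)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
print(round(contrast_ratio("#777777", "#ffffff"), 2))  # just under 4.5:1
```

Mid-gray body text on white sits right at the 4.5:1 boundary; that borderline-but-technically-passing territory is exactly where the description says numeric compliance alone stops being informative.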

Parameters (JSON Schema):

  • elements (required): Array of typography elements to evaluate

  • screen_name (optional): Name of the screen or component being evaluated
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it discloses the cost ('$0.05 USDC via x402 on Base-compatible EVM network per call'), explains the tool's unique approach ('evaluates design judgment, not just numeric compliance'), and provides concrete examples of what it catches. Annotations cover read-only/idempotent/non-destructive behavior, so the description appropriately focuses on operational context rather than repeating safety information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: it starts with the core purpose, immediately states cost, then explains the unique value proposition with bulleted examples, details parameters and returns, and ends with a concrete usage example. Every sentence serves a clear purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (accessibility evaluation with nuanced design judgment), the description provides excellent context: it explains the tool's unique approach, lists specific issue types, details required parameters, describes the return structure, and gives a practical usage example. The annotations cover safety aspects, and while there's no output schema, the description thoroughly documents the return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds meaningful context: it specifies the 'elements' array requires 'element_type, font_size, font_weight, line_height, color_hex, background_color_hex' and gives an example workflow ('Extract text layer properties from Figma'). It also clarifies the purpose of 'screen_name' as 'Optional label for the evaluation report,' adding value beyond the schema's technical description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'evaluates typography elements against a principled accessibility rubric' with specific examples of what it catches (contrast issues, font weight problems, etc.). It distinguishes from sibling tools by specifying it goes 'beyond what axe/Lighthouse/WAVE can check' and focuses on 'design judgment, not just numeric compliance.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Extract text layer properties from Figma using get_design_context, pass the typography properties to this tool for evaluation before shipping.' It also distinguishes from alternatives by stating it goes beyond automated tools like axe/Lighthouse/WAVE, and the sibling tool list suggests clear alternatives (evaluate_accessibility, list_capabilities).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_capabilities (List Axcess Capabilities & Pricing): A
Read-only · Idempotent

Returns available evaluation tools, what they check, and their pricing. Call this first to understand what Axcess can evaluate and how much each evaluation costs.

This tool is FREE. All evaluation tools require USDC payment on Base network.

Returns: JSON with tool descriptions, pricing, and rubric categories.
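A discovery-first flow could look like the sketch below. Only the tool names and prices come from this page; the JSON layout of the response is our assumption, since the description promises only "tool descriptions, pricing, and rubric categories."

```python
# Hypothetical shape for a list_capabilities response; the tool names and
# prices are from this page, the JSON structure itself is an assumption.
capabilities = {
    "tools": [
        {"name": "evaluate_accessibility", "price_usdc": 0.01},
        {"name": "evaluate_typography", "price_usdc": 0.05},
        {"name": "list_capabilities", "price_usdc": 0.0},
    ]
}

# Call the free discovery tool first, then budget for the paid evaluations.
paid = [t for t in capabilities["tools"] if t["price_usdc"] > 0]
total = sum(t["price_usdc"] for t in paid)
print(f"{len(paid)} paid tools, {total:.2f} USDC for one call to each")
```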

Parameters (JSON Schema): no parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations already indicate this is a safe, read-only, idempotent operation, the description discloses that this tool is FREE (important cost context) and that all evaluation tools require USDC payment on Base network (critical system behavior). It also explains the return format (JSON with tool descriptions, pricing, and rubric categories).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise with three sentences that each earn their place: first states the core functionality, second provides critical usage guidance and cost context, third specifies the return format. No wasted words, front-loaded with the most important information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a simple discovery tool with 0 parameters, comprehensive annotations, and no output schema, the description provides complete context. It explains what the tool does, when to use it, cost implications, and return format, covering all necessary aspects for an agent to understand and invoke this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since there are none, and instead focuses on what the tool returns and its purpose. No parameter information is needed or expected.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Returns available evaluation tools, what they check, and their pricing') and distinguishes it from siblings by explaining it should be called first to understand what Axcess can evaluate. It explicitly names the resource (evaluation tools) and scope (capabilities & pricing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call this first to understand what Axcess can evaluate') and distinguishes it from alternatives by positioning it as an initial discovery tool before using evaluation tools like evaluate_accessibility and evaluate_typography. It also specifies that all evaluation tools require payment while this one is free.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
