accessibility_audit

Run automated accessibility audits of web pages to detect WCAG 2.1 Level A and AA violations, reporting issues by severity with specific fix instructions.

Instructions

Run an automated accessibility audit using axe-core. Checks for WCAG 2.1 Level A and AA violations, reporting issues by severity with specific fix instructions.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | URL of the page to audit | (none) |
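For illustration, a hypothetical invocation payload for the tool; the URL is an example, and `url` is the only accepted field:

```typescript
// Hypothetical input for the accessibility_audit tool.
const input = { url: "https://example.com/pricing" };

console.log(JSON.stringify(input)); // {"url":"https://example.com/pricing"}
```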

Implementation Reference

  • The main handler function for the accessibility_audit tool, which orchestrates browser navigation, axe-core injection, and audit execution.
    export async function runAccessibilityAudit(
      url: string
    ): Promise<AccessibilityResult> {
      const page = await createPage(1440, 900);
    
      try {
        await navigateAndWait(page, url, 500);
    
        // Inject axe-core into the page
        const axeSource = await getAxeSource();
        await page.evaluate(axeSource);
    
        // Run the audit
        const rawResults = await page.evaluate(async () => {
          // @ts-expect-error axe is injected at runtime
          const results = await window.axe.run(document, {
            runOnly: {
              type: "tag",
              values: ["wcag2a", "wcag2aa", "wcag21a", "wcag21aa", "best-practice"],
            },
          });
    
          return {
            violations: results.violations.map(
              (v: {
                id: string;
                impact: string;
                description: string;
                help: string;
                helpUrl: string;
                nodes: Array<{
                  target: string[];
                  html: string;
                  failureSummary: string;
                }>;
              }) => ({
                id: v.id,
                impact: v.impact,
                description: v.description,
                help: v.help,
                helpUrl: v.helpUrl,
                nodes: v.nodes.slice(0, 5).map(
                  (n: {
                    target: string[];
                    html: string;
                    failureSummary: string;
                  }) => ({
                    target: n.target,
                    html: n.html.slice(0, 200),
                    failureSummary: n.failureSummary,
                  })
                ),
              })
            ),
            passCount: results.passes.length,
            incompleteCount: results.incomplete.length,
            inapplicableCount: results.inapplicable.length,
          };
        });
    
        return {
          url,
          timestamp: new Date().toISOString(),
          violations: rawResults.violations as readonly AccessibilityViolation[],
          passes: rawResults.passCount,
          incomplete: rawResults.incompleteCount,
          inapplicable: rawResults.inapplicableCount,
        };
      } finally {
        await closePage(page);
      }
    }
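The `AccessibilityResult` and `AccessibilityViolation` types referenced by the handler are not shown in this excerpt. A plausible reconstruction, inferred from the fields the handler populates (the names and the `impact` union are assumptions, not taken from the actual source file):

```typescript
// Hypothetical reconstruction of the result types, inferred from the
// fields the handler maps out of axe-core's results.
interface AccessibilityNode {
  readonly target: string[];
  readonly html: string;
  readonly failureSummary: string;
}

interface AccessibilityViolation {
  readonly id: string;
  readonly impact: "critical" | "serious" | "moderate" | "minor";
  readonly description: string;
  readonly help: string;
  readonly helpUrl: string;
  readonly nodes: AccessibilityNode[];
}

interface AccessibilityResult {
  readonly url: string;
  readonly timestamp: string;
  readonly violations: readonly AccessibilityViolation[];
  readonly passes: number;
  readonly incomplete: number;
  readonly inapplicable: number;
}

// A sample result in the shape the handler returns (illustrative data).
const sample: AccessibilityResult = {
  url: "https://example.com",
  timestamp: new Date().toISOString(),
  violations: [
    {
      id: "image-alt",
      impact: "critical",
      description: "Ensures <img> elements have alternate text",
      help: "Images must have alternate text",
      helpUrl: "https://dequeuniversity.com/rules/axe/4.8/image-alt",
      nodes: [
        {
          target: ["img.hero"],
          html: '<img class="hero" src="hero.png">',
          failureSummary: "Fix any of the following: Element does not have an alt attribute",
        },
      ],
    },
  ],
  passes: 42,
  incomplete: 3,
  inapplicable: 10,
};

console.log(sample.violations.length); // 1
```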
  • Helper function to format the audit results into a readable Markdown report.
    export function formatAccessibilityReport(
      result: AccessibilityResult
    ): string {
      const lines: string[] = [
        `## Accessibility Audit Results`,
        ``,
        `**URL:** ${result.url}`,
        `**Scanned:** ${result.timestamp}`,
        `**Violations:** ${result.violations.length}`,
        `**Passes:** ${result.passes}`,
        `**Incomplete:** ${result.incomplete}`,
        ``,
      ];
    
      if (result.violations.length === 0) {
        lines.push("No accessibility violations found.");
        return lines.join("\n");
      }
    
      // Group by impact
      const byImpact = {
        critical: [] as AccessibilityViolation[],
        serious: [] as AccessibilityViolation[],
        moderate: [] as AccessibilityViolation[],
        minor: [] as AccessibilityViolation[],
      };
    
      for (const violation of result.violations) {
        const bucket = byImpact[violation.impact];
        if (bucket) {
          bucket.push(violation);
        }
      }
    
      for (const [impact, violations] of Object.entries(byImpact)) {
        if (violations.length === 0) continue;
    
        lines.push(`### ${impact.toUpperCase()} (${violations.length})`);
        lines.push(``);
    
        for (const v of violations) {
          lines.push(`- **${v.id}**: ${v.help}`);
          lines.push(`  ${v.description}`);
          lines.push(`  [Learn more](${v.helpUrl})`);
    
          for (const node of v.nodes.slice(0, 3)) {
            lines.push(`  - Element: \`${node.target.join(" > ")}\``);
            lines.push(`    Fix: ${node.failureSummary}`);
          }
    
          lines.push(``);
        }
      }
    
      return lines.join("\n");
    }
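The severity-grouping step in the formatter can be seen in isolation. A minimal sketch with hypothetical violation data; the four buckets mirror axe-core's impact levels, and the simplified `Violation` type here is an assumption for the example:

```typescript
type Impact = "critical" | "serious" | "moderate" | "minor";

interface Violation {
  id: string;
  impact: Impact;
}

// Group violations into severity buckets, preserving the
// critical > serious > moderate > minor ordering of the report.
function groupByImpact(violations: Violation[]): Record<Impact, Violation[]> {
  const byImpact: Record<Impact, Violation[]> = {
    critical: [],
    serious: [],
    moderate: [],
    minor: [],
  };
  for (const v of violations) {
    byImpact[v.impact].push(v);
  }
  return byImpact;
}

const grouped = groupByImpact([
  { id: "image-alt", impact: "critical" },
  { id: "color-contrast", impact: "serious" },
  { id: "region", impact: "moderate" },
  { id: "label", impact: "critical" },
]);

console.log(grouped.critical.length); // 2
```

Because the handler's `impact` field is typed as `string` rather than a union, the original code defensively checks `if (bucket)` before pushing; with a narrowed `Impact` type as above, that guard becomes unnecessary.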
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses the audit engine (axe-core), scope (Level A and AA), and output characteristics (severity levels, fix instructions). However, it does not explicitly confirm the read-only nature or authentication requirements for protected URLs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. The first front-loads the core action and technology; the second specifies standards and output format. Every clause provides essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input and lack of output schema, the description appropriately compensates by describing the report structure (severity, fix instructions). However, with zero annotations, it could be improved by explicitly stating the read-only, non-destructive nature of the audit.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'url' parameter, establishing a baseline of 3. The description does not add additional parameter semantics (such as whether the URL must be publicly accessible, authentication handling, or example formats) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Run', 'Checks') and identifies the exact resource (WCAG 2.1 Level A/AA violations) and technology (axe-core). It clearly distinguishes from siblings like lighthouse_audit and performance_audit by specifying WCAG standards and the axe-core engine.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the WCAG 2.1 specification implies the use case (accessibility compliance), there is no explicit guidance on when to choose this over lighthouse_audit (which also provides accessibility scoring) or when not to use it. Usage is implied but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
