Accessibility MCP Server

by jbuchan

test_accessibility

Test website accessibility against WCAG standards using Playwright and axe-core to identify compliance issues and generate detailed reports.

Instructions

Run accessibility tests on a website using Playwright and axe-core against WCAG standards

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | The URL of the website to test for accessibility | |
| wcagLevel | No | WCAG compliance level to test against | AA |
| wcagVersion | No | WCAG version to test against | 2.1 |
| browser | No | Browser engine to use for testing | chromium |
| includeScreenshot | No | Whether to capture a screenshot of the page | false |
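For concreteness, a call that supplies every optional field might look like the following sketch. The payload shape is the standard MCP `tools/call` arguments object; the values are illustrative, and only `url` is required (the rest fall back to the defaults above):

```typescript
// Illustrative tools/call payload for this server; only `url` is required,
// the other fields override the schema defaults shown in the table above.
const request = {
  name: 'test_accessibility',
  arguments: {
    url: 'https://example.com',
    wcagLevel: 'AAA',        // default: 'AA'
    wcagVersion: '2.2',      // default: '2.1'
    browser: 'firefox',      // default: 'chromium'
    includeScreenshot: true, // default: false
  },
};
```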

Implementation Reference

  • Main MCP handler for 'test_accessibility' tool: validates URL, initializes tester, runs test, saves results to files, formats summary response.
    private async handleAccessibilityTest(request: MCPTestRequest): Promise<CallToolResult> {
      console.log(`Starting accessibility test for: ${request.url}`);
      
      // Validate URL
      try {
        new URL(request.url);
      } catch {
        throw new Error('Invalid URL provided');
      }
    
      // Initialize browser
      await this.tester.initialize();
    
      try {
        // Run the test
        const results = await this.tester.run(request.url, request.wcagLevel, request.criticality);
    
        const testResult: AccessibilityTestResult = {
          id: uuidv4(),
          timestamp: new Date(),
          url: request.url,
          options: {
            url: request.url,
            wcagLevel: request.wcagLevel,
            criticality: request.criticality,
            browser: request.browser || 'chromium',
          },
          axeResults: results,
          summary: {
            violations: results.violations.length,
            passes: results.passes.length,
            incomplete: results.incomplete.length,
            inapplicable: results.inapplicable.length,
          },
        };
    
        // Save results to file
        const filePath = await this.fileManager.saveTestResult(testResult);
        
        // Also save raw JSON results for potential programmatic access
        const rawFilePath = await this.fileManager.saveRawResults(testResult);
    
        const response: MCPTestResponse = {
          success: testResult.error === undefined,
          testId: testResult.id,
          filePath,
          summary: {
            violations: testResult.summary.violations,
            passes: testResult.summary.passes,
            url: testResult.url,
            timestamp: testResult.timestamp.toISOString()
          },
          error: testResult.error
        };
    
        const summary = testResult.error 
          ? `❌ Test failed: ${testResult.error}` 
          : `✅ Test completed successfully!\n\n` +
            `📊 **Summary for ${testResult.url}:**\n` +
            `- Violations: ${testResult.summary.violations}\n` +
            `- Passes: ${testResult.summary.passes}\n` +
            `- Incomplete: ${testResult.summary.incomplete}\n` +
            `- Inapplicable: ${testResult.summary.inapplicable}\n\n` +
            `📁 Results saved to: ${path.basename(filePath)}\n` +
            `📄 Raw data: ${path.basename(rawFilePath)}`;
    
        return {
          content: [
            {
              type: 'text',
              text: summary
            }
          ]
        };
    
      } finally {
        // Clean up browser
        await this.tester.cleanup();
      }
    }
  • Registration of 'test_accessibility' tool in ListTools response, defining name, description, and detailed inputSchema (url, wcagLevel, wcagVersion, browser, includeScreenshot).
    {
      name: 'test_accessibility',
      description: 'Run accessibility tests on a website using Playwright and axe-core against WCAG standards',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'The URL of the website to test for accessibility',
            format: 'uri'
          },
          wcagLevel: {
            type: 'string',
            enum: ['A', 'AA', 'AAA'],
            description: 'WCAG compliance level to test against',
            default: 'AA'
          },
          wcagVersion: {
            type: 'string',
            enum: ['2.1', '2.2'],
            description: 'WCAG version to test against',
            default: '2.1'
          },
          browser: {
            type: 'string',
            enum: ['chromium', 'firefox', 'webkit'],
            description: 'Browser engine to use for testing',
            default: 'chromium'
          },
          includeScreenshot: {
            type: 'boolean',
            description: 'Whether to capture a screenshot of the page',
            default: false
          }
        },
        required: ['url']
      }
    },
  • Core testing logic in AccessibilityTester.run(): launches Playwright browser, navigates to URL, runs axe-core with WCAG tags, filters by criticality, saves screenshot.
    public async run(url: string, wcagLevel?: string, criticality?: string[]): Promise<AxeResults> {
        if (!this.browser) {
            await this.initialize();
        }
    
        const context = await this.browser!.newContext();
        const page = await context.newPage();
        
        try {
            await page.goto(url);
    
            const axeBuilder = new AxeBuilder({ page });

            if (wcagLevel) {
                // Note: axe-core expects full tag strings such as 'wcag21aa';
                // a bare level like 'AA' would match no rules.
                axeBuilder.withTags([wcagLevel]);
            }

            const results = await axeBuilder.analyze();
    
            // Filter violations by criticality if specified
            if (criticality && criticality.length > 0) {
                results.violations = results.violations.filter(violation => 
                    violation.impact && criticality.includes(violation.impact)
                );
            }
    
            const screenshotPath = await this.saveScreenshot(page);
            console.log(`Screenshot saved to ${screenshotPath}`);
    
            return results;
        } finally {
            await page.close();
            await context.close();
        }
    }
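As noted above, `run()` passes `wcagLevel` straight to `withTags`, but axe-core's rule tags combine version and level (for example `wcag21aa`). A small helper could build the correct tag list; this is a sketch assuming only the schema's 2.1/2.2 versions and A/AA/AAA levels, and `toAxeTags` is not part of the source:

```typescript
type WcagLevel = 'A' | 'AA' | 'AAA';
type WcagVersion = '2.1' | '2.2';

// Hypothetical helper: turn ('AA', '2.1') into the cumulative axe-core tags
// ['wcag21a', 'wcag21aa'], since conformance at AA implies A, and AAA implies both.
function toAxeTags(level: WcagLevel = 'AA', version: WcagVersion = '2.1'): string[] {
  const ver = version.replace('.', ''); // '2.1' -> '21'
  const levels: WcagLevel[] = ['A', 'AA', 'AAA'];
  return levels
    .slice(0, levels.indexOf(level) + 1)
    .map((l) => `wcag${ver}${l.toLowerCase()}`);
}
```

With a helper like this, the caller could pass `toAxeTags(...)` to `axeBuilder.withTags(...)` instead of the bare level string (after narrowing the request's `wcagLevel` string to the union type).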
  • TypeScript interface MCPTestRequest defining typed input parameters for the handler.
    export interface MCPTestRequest {
      url: string;
      wcagLevel?: string;
      criticality?: string[];
      wcagVersion?: '2.1' | '2.2';
      browser?: 'chromium' | 'firefox' | 'webkit';
      includeScreenshot?: boolean;
    }
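Since the interface alone enforces nothing at runtime, a small type guard could validate incoming arguments before they reach the handler. This is a sketch mirroring the JSON schema's constraints; `isMCPTestRequest` is a hypothetical helper, not part of the source:

```typescript
interface MCPTestRequest {
  url: string;
  wcagLevel?: string;
  criticality?: string[];
  wcagVersion?: '2.1' | '2.2';
  browser?: 'chromium' | 'firefox' | 'webkit';
  includeScreenshot?: boolean;
}

// Hypothetical runtime guard mirroring the JSON schema's constraints.
function isMCPTestRequest(value: unknown): value is MCPTestRequest {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.url !== 'string') return false;
  try { new URL(v.url); } catch { return false; } // same check the handler performs
  if (v.wcagVersion !== undefined && !['2.1', '2.2'].includes(v.wcagVersion as string)) return false;
  if (v.browser !== undefined && !['chromium', 'firefox', 'webkit'].includes(v.browser as string)) return false;
  if (v.includeScreenshot !== undefined && typeof v.includeScreenshot !== 'boolean') return false;
  return true;
}
```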
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions the testing framework and standards, it doesn't describe what the tool actually returns (e.g., report format, error handling), performance characteristics, authentication needs, or potential side effects like network usage or resource consumption.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary elaboration. Every word contributes to understanding the tool's function, making it appropriately sized and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, no annotations, and no output schema, the description is insufficient. It doesn't explain what the tool returns (critical for a testing tool), error conditions, or behavioral details beyond the basic action. The context signals indicate significant gaps that the description doesn't address.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3 for adequate but not enhanced parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Run accessibility tests'), target resource ('on a website'), and implementation details ('using Playwright and axe-core against WCAG standards'). It distinguishes itself from sibling tools like 'get_test_results' and 'list_test_results' by focusing on the testing action rather than retrieving results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings ('get_test_results', 'list_test_results'), nor does it mention any prerequisites, constraints, or alternative scenarios. It simply states what the tool does without contextual usage information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
