
Debugg AI MCP

Official
by debugg-ai

debugg_ai_test_page_changes

Test and validate UI changes by simulating user interactions on localhost pages. Define features to assess, specify ports, and provide repository details for accurate evaluation.

Instructions

Use DebuggAI to run & test UI changes that have been made with its User emulation agents

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| branchName | No | Current branch name | |
| description | Yes | Description of what page (relative URL) and features should be tested. | |
| filePath | No | Absolute path to the file to test | |
| localPort | No | Localhost port number where the app is running, e.g. 3000 | |
| repoName | No | The name of the current repository | |
| repoPath | No | Local path of the repo root | |
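To make the schema concrete, here is a hypothetical example of the arguments an MCP client might send for this tool. Only `description` is required; all other field values below are illustrative, not taken from the source.

```typescript
// Hypothetical example arguments for debugg_ai_test_page_changes.
// Only `description` is required by the schema; the rest are optional hints.
interface TestPageChangesArgs {
  description: string;
  localPort?: number;
  filePath?: string;
  repoName?: string;
  branchName?: string;
  repoPath?: string;
}

const exampleArgs: TestPageChangesArgs = {
  description:
    "Test the login form on /login: submit valid credentials and verify the dashboard loads",
  localPort: 3000,
  repoName: "my-web-app",
  branchName: "feature/login",
};

console.log(JSON.stringify(exampleArgs, null, 2));
```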

Implementation Reference

  • The primary handler function that orchestrates the E2E testing workflow: initializes DebuggAI client, creates and executes tests, handles progress updates, extracts results including screenshots and GIFs, and formats the response.
    export async function testPageChangesHandler(
      input: TestPageChangesInput,
      context: ToolContext,
      progressCallback?: ProgressCallback
    ): Promise<ToolResponse> {
      const startTime = Date.now();
      const { description } = input;
      
      logger.toolStart('debugg_ai_test_page_changes', input);
    
      try {
        // Use the progress callback from the main handler
    
        // Merge input with config defaults, providing reasonable fallbacks only when needed
        const params = {
          localPort: input.localPort ?? config.defaults.localPort ?? 3000,
          repoName: input.repoName ?? config.defaults.repoName ?? 'unknown-repo',
          branchName: input.branchName ?? config.defaults.branchName ?? 'main',
          repoPath: input.repoPath ?? config.defaults.repoPath ?? process.cwd(),
          filePath: input.filePath ?? config.defaults.filePath ?? '',
        };
    
        logger.info('Starting E2E test with parameters', { 
          description,
          ...params,
          progressToken: context.progressToken 
        });
    
        // Initialize DebuggAI client and runner
        const client = new DebuggAIServerClient(config.api.key);
        await client.init(); // Make sure client is fully initialized
        const e2eTestRunner = new E2eTestRunner(client);
    
        // Create new E2E test
        const e2eRun = await e2eTestRunner.createNewE2eTest(
          params.localPort,
          description,
          params.repoName,
          params.branchName,
          params.repoPath,
          params.filePath
        );
    
        if (!e2eRun) {
          throw new Error('Failed to create E2E test run');
        }
    
        logger.info('E2E test created successfully', { runId: e2eRun.id });
    
        // Send initial progress notification
        if (progressCallback) {
          await progressCallback({
            progress: 0,
            total: 20,
            message: 'E2E test started'
          });
        }
    
        // Handle E2E run execution with progress tracking
        const finalRun = await e2eTestRunner.handleE2eRun(e2eRun, async (update) => {
          logger.info(`E2E test status update: ${update.status}`, { status: update.status });
          
          const curStep = update.conversations?.[0]?.messages?.length || 0;
          const updateMessage = update.conversations?.[0]?.messages?.[curStep - 1]?.jsonContent?.currentState?.nextGoal;
          
          logger.progress(
            updateMessage || `Step ${curStep}`,
            curStep,
            20
          );
    
          // Send MCP progress notification to reset timeout
          if (progressCallback) {
            await progressCallback({
              progress: curStep,
              total: 20,
              message: updateMessage || `Processing step ${curStep}`
            });
          }
        });
    
        const duration = Date.now() - startTime;
        
        if (!finalRun) {
          throw new Error('E2E test execution failed');
        }
    
        // Extract results
        const testResult: E2ETestResult = {
          testOutcome: finalRun.outcome,
          testDetails: finalRun.conversations?.[0]?.messages?.map(
            (message) => message.jsonContent?.currentState?.nextGoal
          ).filter(Boolean),
          finalScreenshot: finalRun.finalScreenshot || undefined,
          runGif: finalRun.runGif || undefined,
        };
    
        logger.info('E2E test completed successfully', { 
          testOutcome: testResult.testOutcome,
          duration: `${duration}ms`
        });
    
        // Prepare response content
        const responseContent: ToolResponse['content'] = [
          {
            type: 'text',
            text: JSON.stringify({
              testOutcome: testResult.testOutcome,
              testDetails: testResult.testDetails,
              executionTime: `${duration}ms`,
              timestamp: new Date().toISOString()
            }, null, 2)
          }
        ];
    
        // Add screenshot if available
        if (testResult.finalScreenshot) {
          try {
            const response = await fetch(testResult.finalScreenshot);
            if (response.ok) {
              const arrayBuffer = await response.arrayBuffer();
              const base64Image = Buffer.from(arrayBuffer).toString('base64');
              
              responseContent.push({
                type: 'image',
                data: base64Image,
                mimeType: 'image/png'
              });
    
              logger.info('Screenshot included in response');
            }
          } catch (error) {
            logger.warn('Failed to fetch screenshot', { 
              screenshotUrl: testResult.finalScreenshot,
              error: error instanceof Error ? error.message : String(error)
            });
          }
        }
    
        logger.toolComplete('debugg_ai_test_page_changes', duration);
    
        return { content: responseContent };
    
      } catch (error) {
        const duration = Date.now() - startTime;
        logger.toolError('debugg_ai_test_page_changes', error as Error, duration);
        
        throw handleExternalServiceError(error, 'DebuggAI', 'test execution');
      }
    }
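The parameter-merge step in the handler relies on nullish coalescing, so explicit falsy values like `0` or `""` from the caller still take precedence over config defaults. A minimal standalone sketch of that fallback chain (`resolveParams` is a hypothetical helper, not part of the codebase):

```typescript
// Standalone sketch of the handler's default-merge logic.
// The real handler reads defaults from its config object.
interface PartialParams {
  localPort?: number;
  repoName?: string;
}

function resolveParams(input: PartialParams, defaults: PartialParams) {
  // `??` falls through only on null/undefined, so an explicit 0 or ""
  // from the caller would still win over the config default.
  return {
    localPort: input.localPort ?? defaults.localPort ?? 3000,
    repoName: input.repoName ?? defaults.repoName ?? "unknown-repo",
  };
}

console.log(resolveParams({}, {}));                                   // hard-coded fallbacks
console.log(resolveParams({ localPort: 8080 }, { localPort: 5173 })); // input wins over config
```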
  • Zod schema for validating tool inputs, defining required 'description' and optional parameters like localPort, repo details.
    export const TestPageChangesInputSchema = z.object({
      description: z.string().min(1, 'Description is required'),
      localPort: z.number().int().min(1).max(65535).optional(),
      filePath: z.string().optional(),
      repoName: z.string().optional(),
      branchName: z.string().optional(),
      repoPath: z.string().optional(),
    });
    
    export type TestPageChangesInput = z.infer<typeof TestPageChangesInputSchema>;
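The constraints the Zod schema enforces can be mirrored by hand for illustration. This is a minimal dependency-free sketch, not the project's actual validation path (`validateInput` is an assumed name; the real code uses Zod):

```typescript
// Hand-rolled equivalent of TestPageChangesInputSchema's constraints,
// for illustration only; the actual implementation uses Zod.
function validateInput(raw: { description?: unknown; localPort?: unknown }): string[] {
  const errors: string[] = [];
  // description: required, non-empty string
  if (typeof raw.description !== "string" || raw.description.length < 1) {
    errors.push("Description is required");
  }
  // localPort: optional integer in the valid TCP port range
  if (raw.localPort !== undefined) {
    const p = raw.localPort;
    if (typeof p !== "number" || !Number.isInteger(p) || p < 1 || p > 65535) {
      errors.push("localPort must be an integer between 1 and 65535");
    }
  }
  return errors;
}
```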
  • tools/index.ts:14-60 (registration)
    Registers the tool in the main tools array (line 41) and validatedTools array (line 60), and populates the toolRegistry Map for lookup.
    import { testPageChangesTool, validatedTestPageChangesTool } from './testPageChanges.js';
    import { 
      startLiveSessionTool,
      stopLiveSessionTool,
      getLiveSessionStatusTool,
      getLiveSessionLogsTool,
      getLiveSessionScreenshotTool,
      validatedLiveSessionTools
    } from './liveSession.js';
    import { 
      listTestsTool,
      listTestSuitesTool,
      createTestSuiteTool, 
      createCommitSuiteTool,
      listCommitSuitesTool,
      getTestStatusTool,
      validatedE2ESuiteTools 
    } from './e2eSuites.js';
    import { 
      quickScreenshotTool,
      validatedQuickScreenshotTool 
    } from './quickScreenshot.js';
    
    /**
     * All available tools for MCP server
     */
    export const tools: Tool[] = [
      testPageChangesTool,
      startLiveSessionTool,
      stopLiveSessionTool,
      getLiveSessionStatusTool,
      getLiveSessionLogsTool,
      getLiveSessionScreenshotTool,
      listTestsTool,
      listTestSuitesTool,
      createTestSuiteTool,
      createCommitSuiteTool,
      listCommitSuitesTool,
      getTestStatusTool,
      quickScreenshotTool,
    ];
    
    /**
     * All validated tools with handlers
     */
    export const validatedTools: ValidatedTool[] = [
      validatedTestPageChangesTool,
      // remaining entries reconstructed from the imports above
      ...validatedLiveSessionTools,
      ...validatedE2ESuiteTools,
      validatedQuickScreenshotTool,
    ];
  • Basic Tool definition including name, description, and inline JSON schema for MCP compatibility; validated version references Zod schema and handler.
    export const testPageChangesTool: Tool = {
      name: "debugg_ai_test_page_changes",
      description: "Run end-to-end browser tests using AI agents that interact with your web application like real users. Tests specific pages, features, or workflows by clicking buttons, filling forms, and validating behavior. Returns screenshots and detailed results.",
      inputSchema: {
        type: "object",
        properties: {
          description: {
            type: "string",
            description: "Natural language description of what to test (e.g., 'Test login form on /login page' or 'Click the submit button and verify success message appears')",
            minLength: 1
          },
          localPort: {
            type: "number",
            description: "Port number where your local development server is running (e.g., 3000 for React, 8080 for Vue)",
            minimum: 1,
            maximum: 65535
          },
          filePath: {
            type: "string",
            description: "Absolute path to the main file being tested (helps provide context to the AI)"
          },
          repoName: {
            type: "string",
            description: "Name of your Git repository (e.g., 'my-web-app')"
          },
          branchName: {
            type: "string",
            description: "Current Git branch name (e.g., 'main', 'feature/login')"
          },
          repoPath: {
            type: "string",
            description: "Absolute path to your project's root directory"
          },
        },
        required: ["description"],
        additionalProperties: false
      },
    };
  • tools/index.ts:69-74 (registration)
    Populates the toolRegistry Map with all validated tools, enabling lookup by name including 'debugg_ai_test_page_changes'.
    export const toolRegistry = new Map<string, ValidatedTool>();
    
    // Initialize tool registry
    for (const tool of validatedTools) {
      toolRegistry.set(tool.name, tool);
    }
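Dispatch over the registry is then a plain `Map.get` keyed by tool name. A minimal sketch of that lookup pattern, reduced to the fields needed for the example (the `lookup` function and the second tool name are hypothetical):

```typescript
// Minimal sketch of name-based dispatch over a tool registry.
interface MiniTool {
  name: string;
}

const registry = new Map<string, MiniTool>();
for (const tool of [
  { name: "debugg_ai_test_page_changes" }, // real tool name from this page
  { name: "example_other_tool" },          // hypothetical placeholder
]) {
  registry.set(tool.name, tool);
}

function lookup(name: string): MiniTool {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool;
}
```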
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'run & test UI changes' but lacks details on execution behavior, such as whether it's destructive, requires specific permissions, handles errors, or has rate limits. This leaves significant gaps in understanding how the tool operates.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is appropriately sized and front-loaded, though it could be slightly more structured with brief usage hints to improve clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, and no annotations), the description is insufficient. It lacks information on what the tool returns, how results are presented, error handling, or any behavioral context needed for effective use, making it incomplete for an agent to operate confidently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional meaning about parameters beyond what's in the schema, such as explaining relationships between inputs or usage examples. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Use DebuggAI to run & test UI changes that have been made with its User emulation agents.' It specifies the action (run & test), the resource (UI changes), and the method (User emulation agents). However, without sibling tools, it cannot demonstrate differentiation from alternatives, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus other testing methods or prerequisites. It states what the tool does but offers no context about appropriate scenarios, limitations, or alternatives, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
