
playwright_assert_response

Validate HTTP responses in browser automation tests by checking response body content against expected values after initiating a wait operation.

Instructions

Wait for and validate a previously initiated HTTP response wait operation.

Input Schema

• id (required): Identifier of the HTTP response initially expected using `playwright_expect_response`.
• value (optional): Data to expect in the body of the HTTP response. If provided, the assertion fails if this value is not found in the response body.

Implementation Reference

  • The AssertResponseTool class implements the core logic for the 'playwright_assert_response' tool. It retrieves a previously set response promise, awaits the response, parses the JSON body, optionally asserts if a specific value is present, and returns detailed success or error information.
    export class AssertResponseTool extends BrowserToolBase {
      /**
       * Execute the assert response tool
       */
      async execute(args: AssertResponseArgs, context: ToolContext): Promise<ToolResponse> {
        return this.safeExecute(context, async () => {
          if (!args.id) {
            return createErrorResponse("Missing required parameter: id must be provided");
          }
    
          const responsePromise = responsePromises.get(args.id);
          if (!responsePromise) {
            return createErrorResponse(`No response wait operation found with ID: ${args.id}`);
          }
    
          try {
            const response = await responsePromise;
            const body = await response.json();
    
            if (args.value) {
              const bodyStr = JSON.stringify(body);
              if (!bodyStr.includes(args.value)) {
                const messages = [
                  `Response body does not contain expected value: ${args.value}`,
                  `Actual body: ${bodyStr}`,
                ];
                return createErrorResponse(messages.join("\n"));
              }
            }
    
            const messages = [
              `Response assertion for ID ${args.id} successful`,
              `URL: ${response.url()}`,
              `Status: ${response.status()}`,
              `Body: ${JSON.stringify(body, null, 2)}`,
            ];
            return createSuccessResponse(messages.join("\n"));
          } catch (error) {
            return createErrorResponse(`Failed to assert response: ${(error as Error).message}`);
          } finally {
            responsePromises.delete(args.id);
          }
        });
      }
    }
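The handoff between the two sibling tools can be modeled in isolation. The sketch below is a minimal, hypothetical reconstruction of the `responsePromises` pairing (the `expectResponse`/`assertResponse` names are stand-ins, not the real tool wiring), showing the one-shot semantics enforced by the `finally` block:

```typescript
// Minimal sketch of the expect/assert handoff; names are assumptions.
type Body = Record<string, unknown>;

const responsePromises = new Map<string, Promise<Body>>();

// Stand-in for playwright_expect_response: register a pending response under an ID.
function expectResponse(id: string, pending: Promise<Body>): void {
  responsePromises.set(id, pending);
}

// Stand-in for the assert step: await the body, optionally check for a substring,
// and always clear the ID so it cannot be awaited twice.
async function assertResponse(id: string, value?: string): Promise<string> {
  const pending = responsePromises.get(id);
  if (!pending) {
    throw new Error(`No response wait operation found with ID: ${id}`);
  }
  try {
    const body = await pending;
    const bodyStr = JSON.stringify(body);
    if (value !== undefined && !bodyStr.includes(value)) {
      throw new Error(`Response body does not contain expected value: ${value}`);
    }
    return bodyStr;
  } finally {
    responsePromises.delete(id); // one-shot: mirrors the tool's finally block
  }
}

// Example: a matching assertion resolves with the serialized body.
expectResponse("login", Promise.resolve({ status: "ok", user: "alice" }));
assertResponse("login", "alice").then((body) => console.log(body));
```

Note that because the ID is deleted even on failure, a retry requires re-running the expect step first.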
  • The input schema definition for the 'playwright_assert_response' tool, specifying parameters 'id' (required) and optional 'value' for response body assertion.
    {
      name: "playwright_assert_response",
      description: "Wait for and validate a previously initiated HTTP response wait operation.",
      inputSchema: {
        type: "object",
        properties: {
          id: {
            type: "string",
            description: "Identifier of the HTTP response initially expected using `Playwright_expect_response`.",
          },
          value: {
            type: "string",
            description:
              "Data to expect in the body of the HTTP response. If provided, the assertion will fail if this value is not found in the response body.",
          },
        },
        required: ["id"],
      },
    },
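Under this schema, a call's arguments might look like the following. The ID and value here are illustrative, not taken from the project; the only hard requirement the schema imposes is that `id` is present:

```typescript
// Illustrative arguments object matching the schema above; the id must match
// one registered earlier with the expect tool (these values are made up).
const args = {
  id: "login-response",   // required
  value: '"status":"ok"', // optional substring to search for in the JSON body
};

// A minimal check mirroring the tool's required-parameter validation:
if (!args.id) {
  throw new Error("Missing required parameter: id must be provided");
}
console.log(JSON.stringify(args));
```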
  • Registration of the 'playwright_assert_response' tool in the main tool dispatch switch statement, delegating execution to the AssertResponseTool instance.
    case "playwright_assert_response":
      return await assertResponseTool.execute(args, context);
  • The tool name is listed in the BROWSER_TOOLS array (src/tools.ts:506), used for conditional browser launching.
    "playwright_assert_response",
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'Wait for and validate' and that assertion fails if the value isn't found, which hints at blocking behavior and validation logic. However, it lacks details on timeouts, error handling, or what happens if the response never arrives, making it insufficient for a mutation/validation tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It directly communicates the tool's function, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is moderately complete. It covers the basic purpose and hints at validation behavior but lacks details on return values, error cases, or integration with sibling tools. For a tool that performs waiting and assertion with parameters, this leaves gaps in full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('id' and 'value') well. The description adds minimal value beyond this, as it doesn't explain parameter interactions or provide additional context like format examples. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Wait for and validate a previously initiated HTTP response wait operation.' It specifies the action (wait for and validate) and the resource (HTTP response from a previous wait operation), making it understandable. However, it doesn't explicitly differentiate from sibling tools like 'playwright_expect_response', which likely initiates the wait, so it misses full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by referencing 'previously initiated HTTP response wait operation' and 'initially expected using `Playwright_expect_response`', suggesting this tool is used after that sibling. However, it doesn't provide explicit when-to-use guidance, alternatives, or exclusions, leaving some ambiguity about its role in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
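The workflow implied above can be sketched as an ordering of tool calls: register the wait, trigger the request, then assert. In this sketch `callTool` is a stub standing in for an MCP client, and any tool name other than the two documented on this page (the click tool and its parameters) is an assumption:

```typescript
// Hypothetical expect -> trigger -> assert flow; callTool is a stub client.
const order: string[] = [];

async function callTool(name: string, args: Record<string, unknown>): Promise<void> {
  order.push(name); // a real client would dispatch this call to the MCP server
  void args;
}

async function checkLogin(): Promise<void> {
  // 1. Register the wait before triggering the request, so the response isn't missed.
  await callTool("playwright_expect_response", { id: "login", url: "**/api/login" });
  // 2. Perform the browser action that fires the request (assumed tool name).
  await callTool("playwright_click", { selector: "#submit" });
  // 3. Await and validate the captured response body.
  await callTool("playwright_assert_response", { id: "login", value: '"token"' });
}

checkLogin().then(() => console.log(order.join(" -> ")));
```

Registering the wait before the triggering action matters: if the request completes before the expect step runs, the assert step has nothing to await.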
