
semantic_search_requests

Search for specific requests within a page URL using semantic querying to retrieve the top 10 relevant results. Ideal for analyzing and extracting targeted browser interactions.

Instructions

Semantically search for requests that occurred within a page URL. Returns the top 10 results.

Input Schema

  • page_url (required): The page within which to search for requests
  • query (required): Your search request. Make this specific and detailed to get the best results
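
For reference, here is a hypothetical arguments object an MCP client might send when calling this tool; the page URL and query text are placeholders, not values taken from the repository.

    // Hypothetical call arguments; the URL and query are illustrative only.
    const args = {
      page_url: "https://example.com/checkout",
      query: "POST request that submits the payment form with card details",
    };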

Implementation Reference

  • The main handler function that performs semantic search on requests using embeddings and cosine similarity, returning the top 10 matches (a combined usage sketch follows this list).
    export async function semanticSearchRequestsSentTransformer(
      query: string,
      requests: Array<RequestRecord>,
      pipeline: FeatureExtractionPipeline
    ): Promise<Array<RequestRecord & { similarity: number }>> {
      // Get embedding for the query
      const queryEmbedding = await getEmbeddingSentTransformer(query, pipeline);
    
      // Calculate cosine similarity scores for all requests
      const scoredRequests = requests.map((request) => {
        // Compute cosine similarity between query and request embeddings
        const similarity = cosineSimilarity(queryEmbedding, request.embedding);
        return { ...request, similarity };
      });
    
      // Sort by similarity score (highest first) and take top 10
      return scoredRequests
        .sort((a, b) => b.similarity - a.similarity)
        .slice(0, 10);
    }
  • index.ts:70-88 (registration)
    Registration of the 'semantic_search_requests' tool in the TOOLS array, including name, description, and input schema.
      name: "semantic_search_requests",
      description:
        "Semantically search for requests that occurred within a page URL. Returns the top 10 results.",
      inputSchema: {
        type: "object",
        properties: {
          query: {
            type: "string",
            description:
              "Your search request. Make this specific and detailed to get the best results",
          },
          page_url: {
            type: "string",
            description: "The page within which to search for requests",
          },
        },
        required: ["query", "page_url"],
      },
    },
  • Tool dispatch handler in the main switch that calls the semantic search transformer function.
    case "semantic_search_requests": {
      if (!pipeline) {
        return {
          content: [{ type: "text", text: "Model not defined" }],
          isError: true,
        };
      }
      const searchResults = await semanticSearchRequestsSentTransformer(
        args.query,
        requests.get(args.page_url),
        pipeline
      );
      const withoutEmbedding = searchResults.map(
        ({ embedding, similarity, ...rest }) => rest
      );
      return {
        content: [
          { type: "text", text: JSON.stringify(withoutEmbedding, null, 2) },
        ],
        isError: false,
      };
    }
  • Helper function to compute cosine similarity used in semantic search.
    function cosineSimilarity(a: number[], b: number[]): number {
      const dotProduct = a.reduce((sum, val, i) => sum + val * b[i], 0);
      const magnitudeA = Math.sqrt(a.reduce((sum, val) => sum + val * val, 0));
      const magnitudeB = Math.sqrt(b.reduce((sum, val) => sum + val * val, 0));
      return dotProduct / (magnitudeA * magnitudeB);
    }
  • Helper function to get embeddings using the transformer pipeline.
    export async function getEmbeddingSentTransformer(
      text: string,
      pipeline: FeatureExtractionPipeline
    ): Promise<number[]> {
      const embedding = await pipeline(text);
      return Array.from(embedding.data);
    }
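
Taken together, these pieces suggest a straightforward flow: embed the query, score it against precomputed request embeddings with cosine similarity, and return the ten closest matches. The sketch below shows how the excerpted helpers might be wired together; the @xenova/transformers package name, the model id, and the simplified RequestRecord shape are assumptions, not details confirmed by the repository.

    // Sketch under stated assumptions: the package name, model id, and RequestRecord
    // shape are guesses. semanticSearchRequestsSentTransformer and
    // getEmbeddingSentTransformer are the functions excerpted above.
    import { pipeline, FeatureExtractionPipeline } from "@xenova/transformers";

    interface RequestRecord {
      url: string;          // simplified; the real record likely carries method, headers, body, etc.
      embedding: number[];  // precomputed with getEmbeddingSentTransformer
    }

    async function demo() {
      // Load a small sentence-embedding model once and reuse it for every query.
      const extractor = (await pipeline(
        "feature-extraction",
        "Xenova/all-MiniLM-L6-v2"
      )) as FeatureExtractionPipeline;

      // Requests captured for one page, embedded ahead of time so a search
      // only has to embed the query string.
      const captured: RequestRecord[] = [
        {
          url: "https://example.com/api/cart",
          embedding: await getEmbeddingSentTransformer("GET cart contents", extractor),
        },
        {
          url: "https://example.com/api/pay",
          embedding: await getEmbeddingSentTransformer("POST payment details", extractor),
        },
      ];

      // Score every captured request against the query and keep the top 10.
      const top = await semanticSearchRequestsSentTransformer(
        "request that submits the payment",
        captured,
        extractor
      );
      console.log(top.map(({ url, similarity }) => ({ url, similarity })));
    }

Note that the dispatch handler shown above strips both the embedding and similarity fields from each result before serializing the response, so callers only see the underlying request records.
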
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool returns 'top 10 results,' which is useful, but lacks critical details: it doesn't specify what 'requests' refer to (e.g., HTTP requests, user requests), how semantic search works, whether it's read-only or has side effects, or any rate limits or permissions required. This is inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: two sentences that directly state the tool's function and output. Every word earns its place, with no redundant or vague phrasing, making it efficient for an agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a semantic search tool with no annotations and no output schema, the description is incomplete. It doesn't explain what 'requests' are, how results are ranked or formatted, or any error conditions. Without this context, an agent might struggle to use the tool correctly or interpret outputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('page_url' and 'query') with clear descriptions. The description adds no additional parameter semantics beyond what's in the schema, such as format examples or constraints. Baseline 3 is appropriate when the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Semantically search for requests that occurred within a page URL.' It specifies the verb (search), resource (requests), and scope (within a page URL). However, it doesn't explicitly differentiate from sibling tools like 'make_http_request' or 'puppeteer_page_history', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'make_http_request' (for making requests) or 'puppeteer_navigate' (for page navigation), nor does it specify prerequisites or exclusions. This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
