
agent

Search, browse, and extract web data using natural language prompts. Specify data needs in plain English to gather structured information from multiple sources.

Instructions

Autonomous data gathering agent. Describe what you need in natural language and the agent will search, browse, and extract data. Costs variable credits.

Input Schema

| Name        | Required | Description                                    | Default |
|-------------|----------|------------------------------------------------|---------|
| prompt      | Yes      | Natural language description of data to gather |         |
| schema      | No       | JSON schema for structured output              |         |
| max_credits | No       | Max credits to spend                           | 10      |
| max_sources | No       | Max sources to consult                         | 5       |
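
As a concrete illustration, here is a hypothetical set of arguments a client might pass when calling the tool. The field names come from the input schema above; the prompt and schema values are invented for the example:

```typescript
// Hypothetical arguments for an "agent" tool call. Field names match the
// input schema above; the prompt and schema values are made-up examples.
const args = {
  prompt: "List the three most recent releases of the zod npm package",
  schema: {
    type: "object",
    properties: { releases: { type: "array", items: { type: "string" } } },
  },
  max_credits: 10, // stop after spending 10 credits
  max_sources: 5,  // consult at most 5 sources
};

console.log(JSON.stringify(args));
```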

Implementation Reference

  • Handler function for the 'agent' tool: builds the request body from the input parameters (prompt, schema, max_credits, max_sources), calls apiPost('/agent', body), and returns the result formatted as JSON
    async ({ prompt, schema, max_credits, max_sources }) => {
      const body: Record<string, unknown> = { prompt, max_credits, max_sources };
      if (schema) body.schema = schema;
      return jsonResult(await apiPost("/agent", body));
    }
  • Input schema for the 'agent' tool, defined with Zod: prompt (required string), schema (optional record), max_credits (optional number, default 10), and max_sources (optional number, default 5)
    {
      prompt: z.string().describe("Natural language description of data to gather"),
      schema: z.record(z.unknown()).optional().describe("JSON schema for structured output"),
      max_credits: z.number().optional().default(10).describe("Max credits to spend (default: 10)"),
      max_sources: z.number().optional().default(5).describe("Max sources to consult (default: 5)"),
    },
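Because max_credits and max_sources use `.optional().default(...)`, Zod fills them in whenever a caller omits them. A minimal sketch of that behavior, written without the zod dependency so it stands alone:

```typescript
// Sketch (no zod dependency) of how the optional defaults behave: fields the
// caller omits fall back to 10 (max_credits) and 5 (max_sources).
interface AgentInput {
  prompt: string;
  schema?: Record<string, unknown>;
  max_credits?: number;
  max_sources?: number;
}

function applyDefaults(input: AgentInput) {
  // Spread the caller's input last so explicit values win over the defaults.
  return { max_credits: 10, max_sources: 5, ...input };
}

const parsed = applyDefaults({ prompt: "current weather in Paris" });
console.log(parsed.max_credits, parsed.max_sources);
```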
  • src/index.ts:206-220 (registration)
    Tool registration for 'agent' - registers the tool with name, description, input schema, and handler function using server.tool()
    server.tool(
      "agent",
      "Autonomous data gathering agent. Describe what you need in natural language and the agent will search, browse, and extract data. Costs variable credits.",
      {
        prompt: z.string().describe("Natural language description of data to gather"),
        schema: z.record(z.unknown()).optional().describe("JSON schema for structured output"),
        max_credits: z.number().optional().default(10).describe("Max credits to spend (default: 10)"),
        max_sources: z.number().optional().default(5).describe("Max sources to consult (default: 5)"),
      },
      async ({ prompt, schema, max_credits, max_sources }) => {
        const body: Record<string, unknown> = { prompt, max_credits, max_sources };
        if (schema) body.schema = schema;
        return jsonResult(await apiPost("/agent", body));
      }
    );
  • apiPost helper function - makes HTTP POST requests to the SearchClaw API with timeout handling, error handling, and JSON response parsing
    async function apiPost(path: string, body: Record<string, unknown>) {
      const controller = new AbortController();
      const timeout = setTimeout(() => controller.abort(), 30000);
      try {
        const response = await fetch(`${API_BASE}${path}`, {
          method: "POST",
          headers: { ...headers, "Content-Type": "application/json" },
          body: JSON.stringify(body),
          signal: controller.signal,
        });
        if (!response.ok) {
          const text = await response.text();
          throw new Error(`SearchClaw API error ${response.status}: ${text}`);
        }
        return response.json();
      } finally {
        clearTimeout(timeout);
      }
    }
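The timeout pattern apiPost relies on can be sketched in isolation: an AbortController whose abort() is scheduled with setTimeout and cancelled once the request settles. In this sketch the fetch function is injected (a hypothetical FetchLike type) so the example runs without a network:

```typescript
// Isolated sketch of apiPost's timeout pattern. fetchFn is injectable so the
// example is self-contained; FetchLike is a hypothetical minimal interface.
type FetchLike = (
  url: string,
  init: { signal: AbortSignal }
) => Promise<{ ok: boolean; json(): Promise<unknown> }>;

async function postWithTimeout(fetchFn: FetchLike, url: string, ms: number) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    const res = await fetchFn(url, { signal: controller.signal });
    if (!res.ok) throw new Error("request failed");
    return res.json();
  } finally {
    clearTimeout(timer); // always cancel the pending abort
  }
}

// A fake fetch that resolves immediately, so the timeout never fires.
const demo = postWithTimeout(
  async () => ({ ok: true, json: async () => ({ done: true }) }),
  "https://example.test/agent",
  1000
);
demo.then((data) => console.log(JSON.stringify(data)));
```

The finally block matters: without clearTimeout, the scheduled abort would keep the process alive and fire after a successful response.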
  • jsonResult helper function - formats API response data into MCP tool result format with text content type
    function jsonResult(data: unknown) {
      return { content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }] };
    }
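Calling jsonResult wraps any value in the MCP text-content result shape. The helper is reproduced below so the example is self-contained:

```typescript
// jsonResult reproduced from above: wraps arbitrary data in MCP's
// { content: [{ type: "text", text }] } tool-result shape.
function jsonResult(data: unknown) {
  return { content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }] };
}

const result = jsonResult({ ok: true });
console.log(result.content[0].type); // "text"
```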
