Glama

search

Searches the web and returns relevant results for a query, supporting AI agents with web data retrieval.

Instructions

Search the web and return results ($0.002)

Input Schema

Name   Required  Description  Default
query  Yes       —            —
count  No        —            5

Implementation Reference

  • index.js:50-79 (handler)
    General tool handler function that executes requests to the IteraTools API.
    async function callTool(endpoint, params) {
      const fetch = (await import('node-fetch')).default;
      // Look the tool up once; GET endpoints pass params in the query string,
      // everything else POSTs them as a JSON body.
      const tool = TOOLS.find(t => t.endpoint === endpoint);
      const isGet = tool?.method === 'GET';
      
      const url = isGet 
        ? `${BASE_URL}${endpoint}?${new URLSearchParams(params)}`
        : `${BASE_URL}${endpoint}`;
      
      const res = await fetch(url, {
        method: isGet ? 'GET' : 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${API_KEY}`,
        },
        body: isGet ? undefined : JSON.stringify(params),
      });
      
      // The API may return non-JSON error bodies; keep the raw text as a fallback.
      const text = await res.text();
      let data;
      try { data = JSON.parse(text); } catch { data = { raw: text }; }
      
      if (!res.ok) {
        if (res.status === 402) {
          throw new Error(`Insufficient credits. Add credits at https://iteratools.com. Cost: ${tool?.price || 'see docs'}`);
        }
        throw new Error(`API error ${res.status}: ${text.substring(0, 200)}`);
      }
      
      return data;
    }
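Because the search registration defines no `method` field, the handler takes the POST branch for this tool: params travel in the JSON body rather than the query string. A minimal sketch of the request it would assemble (BASE_URL here is a stand-in placeholder; the real base URL is not shown in this excerpt):

```javascript
// Sketch of the request callTool builds for the 'search' tool (POST branch).
// BASE_URL is an assumed placeholder, not confirmed by the excerpt.
const BASE_URL = 'https://api.example.com';
const endpoint = '/search';
const params = { query: 'model context protocol', count: 3 };

const request = {
  url: `${BASE_URL}${endpoint}`,          // no query string for POST endpoints
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(params),           // params serialized into the body
};

console.log(request.url, request.body);
```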
  • index.js:30-30 (registration)
    Tool definition for "search" within the TOOLS configuration array.
    { name: 'search', description: 'Search the web and return results', inputSchema: { type: 'object', properties: { query: { type: 'string' }, count: { type: 'number', default: 5 } }, required: ['query'] }, endpoint: '/search', price: '$0.002' },
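From the agent side, an MCP `tools/call` invocation of this definition might look like the following sketch. The argument values are illustrative; per the schema, only `query` is required and `count` falls back to its default of 5 when omitted:

```javascript
// Hypothetical MCP tools/call payload targeting the 'search' tool.
// Argument values are illustrative; only `query` is required by the schema.
const call = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'search',
    arguments: { query: 'MCP server directory', count: 10 },
  },
};

console.log(JSON.stringify(call));
```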
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description includes cost information ('$0.002') which is valuable behavioral context not present in the schema or annotations. However, with no annotations provided, the description fails to disclose other important behavioral traits such as whether the operation is read-only, rate limits, authentication requirements, or the format/structure of the returned results.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single efficient sentence where every element serves a distinct purpose: 'Search' defines the action, 'the web' identifies the resource, 'return results' describes the output, and '($0.002)' provides cost context. There is no redundant or extraneous text, and the information is front-loaded with the core action stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complete absence of schema descriptions for both parameters and the lack of an output schema or annotations, the description is insufficiently complete. While it conveys the basic operation and cost, it fails to document critical implementation details such as parameter usage, return value structure, or error handling that an agent would need to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero percent schema description coverage, the description bears full responsibility for explaining parameter semantics, yet it completely omits any mention of the 'query' or 'count' parameters. There is no explanation of what constitutes a valid query string, expected format, or that the count parameter controls the number of results returned.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a web search and returns results, specifying the exact resource ('the web') and action. However, it does not differentiate this tool from siblings like 'scrape' or 'browser_act' which also interact with web content, potentially causing confusion about when to use search versus direct content extraction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternative tools such as 'scrape' for URL-specific extraction or 'browser_act' for browser automation. There are no stated prerequisites, constraints, or conditions that would help an agent select this tool appropriately from the available web-interaction siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
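Taken together, these critiques point toward a richer registration entry. The following is only a sketch of how the definition could incorporate that feedback — the wording is illustrative, not taken from the project:

```javascript
// Illustrative revision of the 'search' registration addressing the review:
// read-only disclosure, cost, auth requirement, parameter semantics with
// defaults, and guidance relative to sibling tools. Not the project's code.
const searchTool = {
  name: 'search',
  description:
    'Search the web and return a list of results. Read-only; costs $0.002 ' +
    'per call and requires an IteraTools API key with credits. Use this to ' +
    'discover relevant pages; use scrape to extract content from a known URL.',
  inputSchema: {
    type: 'object',
    properties: {
      query: {
        type: 'string',
        description: 'Search terms, as typed into a search engine.',
      },
      count: {
        type: 'number',
        description: 'Maximum number of results to return.',
        default: 5,
      },
    },
    required: ['query'],
  },
  endpoint: '/search',
  price: '$0.002',
};

console.log(searchTool.description);
```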


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/fredpsantos33/itera-tools-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.