get_search_terms

Retrieve customer search terms that generated ad impressions, with attributed sales, clicks, and spend per ASIN for a specified date range.

Instructions

Read search terms (customer queries) that triggered ad impressions, with attributed sales, clicks, and spend by ASIN.

Input Schema

Name        Required  Description                           Default
start_date  No        Start of the date range, YYYY-MM-DD.  -
end_date    No        End of the date range, YYYY-MM-DD.    -

Implementation Reference

  • src/index.ts:53-58 (registration)
    Registration of the 'get_search_terms' tool with its name, description, and input schema (dateRangeSchema). The tool is declared but its actual handler logic is delegated to a remote hosted MCP endpoint.
    {
      name: "get_search_terms",
      description:
        "Read search terms (customer queries) that triggered ad impressions, with attributed sales, clicks, and spend by ASIN.",
      inputSchema: dateRangeSchema,
    },
  • Input schema for 'get_search_terms': accepts optional start_date and end_date strings in YYYY-MM-DD format.
    const dateRangeSchema = {
      type: "object" as const,
      properties: {
        start_date: {
          type: "string",
          format: "date",
          description: "Start of the date range, YYYY-MM-DD.",
        },
        end_date: {
          type: "string",
          format: "date",
          description: "End of the date range, YYYY-MM-DD.",
        },
      },
      additionalProperties: false,
    }
  • Handler for all tool calls. For any tool other than 'agentcentral_setup' (including 'get_search_terms'), it returns a generic notice telling the user to connect to the remote hosted MCP endpoint. The actual implementation lives on the remote server.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const name = request.params.name
      if (name === "agentcentral_setup") {
        return {
          content: [
            {
              type: "text",
              text:
                `Hosted MCP endpoint:\n  ${HOSTED_URL}\n\n` +
                `Setup guide:\n  ${SETUP_URL}\n\n` +
                `Add this to your client config:\n` +
                `{\n  "mcpServers": {\n    "agentcentral": {\n      "url": "${HOSTED_URL}",\n      "headers": { "Authorization": "Bearer ac_live_<YOUR_API_KEY>" }\n    }\n  }\n}`,
            },
          ],
          isError: false,
        }
      }
      // For any other tool, including 'get_search_terms', the handler falls
      // through to a generic notice directing the user to the remote hosted
      // MCP endpoint (that branch is elided here).
    })
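Because the tool's real implementation lives on the remote endpoint, nothing validates the date parameters locally before the call. A minimal, hypothetical client-side check mirroring the schema's `format: "date"` constraint could look like this (the helper name is ours, not part of the source):

```typescript
// Hypothetical pre-flight check (not part of the server) that mirrors the
// JSON Schema "format: date" constraint on start_date / end_date.
function isValidDateParam(value: string): boolean {
  // Must match YYYY-MM-DD syntactically.
  if (!/^\d{4}-\d{2}-\d{2}$/.test(value)) return false;
  const parsed = new Date(`${value}T00:00:00Z`);
  // Reject syntactically valid but non-existent dates like 2025-02-30;
  // V8 yields an Invalid Date for out-of-range ISO components.
  return (
    !Number.isNaN(parsed.getTime()) &&
    parsed.toISOString().slice(0, 10) === value
  );
}

console.log(isValidDateParam("2025-01-31")); // true
console.log(isValidDateParam("2025-02-30")); // false
console.log(isValidDateParam("01/31/2025")); // false
```

Checking locally avoids a round trip to the hosted endpoint for arguments the schema would reject anyway.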
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full disclosure burden, yet it only states that the tool reads data. It omits behavioral traits such as the aggregation level of the data, pagination, required permissions, and what happens when no search terms exist for the given date range.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
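One way to close this gap, sketched here under the assumption that the server adopts MCP tool annotations: the annotation fields below are from the MCP spec, but the values are our guesses about this tool's behavior, not confirmed by the source.

```typescript
// Sketch: declaring behavioral traits via MCP tool annotations instead of
// relying on the description alone. Values are assumptions about this tool.
const getSearchTermsAnnotations = {
  title: "Get Search Terms",
  readOnlyHint: true,   // reads reporting data; no writes or side effects
  idempotentHint: true, // the same date range yields the same report
  openWorldHint: true,  // queries a remote, external ads backend
};

console.log(Object.keys(getSearchTermsAnnotations).length); // 4
```

With hints like these in place, the prose description is free to focus on semantics (aggregation level, empty-result behavior) rather than basic read/write safety.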

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and key attributes. Every word adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (two date params, no output schema) and the existence of siblings such as 'get_keyword_performance', the description is adequate, but it neither clarifies how this tool differs from them nor says whether results are limited (e.g., to top terms). It lacks context on the output structure but is minimally sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both 'start_date' and 'end_date' described with format and meaning. The description adds no extra semantics beyond the schema, earning the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
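For instance, the schema marks both dates optional but never says what an omitted date means. A hypothetical helper documenting one plausible policy, a trailing 30-day window ending today (an assumed default, not taken from the source):

```typescript
// Hypothetical defaulting policy for the optional date parameters:
// a missing end_date falls back to "now"; a missing start_date falls
// back to 30 days before "now". This is an illustrative assumption.
function resolveDateRange(
  start?: string,
  end?: string,
  now: Date = new Date(),
): { start_date: string; end_date: string } {
  const toYmd = (d: Date) => d.toISOString().slice(0, 10);
  const endDate = end ?? toYmd(now);
  const startDate =
    start ?? toYmd(new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000));
  return { start_date: startDate, end_date: endDate };
}

console.log(
  resolveDateRange(undefined, undefined, new Date("2025-03-31T12:00:00Z")),
);
// → start_date "2025-03-01", end_date "2025-03-31"
```

Whatever policy the remote server actually applies, spelling it out in the description would let agents omit parameters deliberately rather than by accident.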

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it reads search terms (customer queries) that triggered ad impressions, with specific attributed metrics (sales, clicks, spend) by ASIN. The verb 'Read' and resource 'search terms' are specific, distinguishing it from keyword-level tools like 'get_keyword_performance'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It does not mention prerequisites, limitations, or scenarios where another tool like 'get_keyword_performance' would be more appropriate, leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
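A rewritten description with explicit usage guidance might read as follows; the wording is purely illustrative, not the tool's actual description:

```typescript
// Illustrative example of a description that adds the "use X instead of Y
// when Z" guidance this review asks for. Wording is ours, not the source's.
const improvedDescription =
  "Read customer search terms (actual shopper queries) that triggered ad " +
  "impressions, with attributed sales, clicks, and spend per ASIN. Use this " +
  "to discover what customers actually typed; use get_keyword_performance " +
  "instead when you need metrics for the keywords you bid on. Read-only; " +
  "requires a connection to the hosted endpoint with a valid API key.";

console.log(improvedDescription.includes("get_keyword_performance")); // true
```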


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/agentcentral-to/agent-central-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server