
Local SEO Data

Official

search_volume

Read-only

Retrieve monthly search volume, CPC, competition, and trend data for up to 1,000 keywords. Supports local SEO keyword research by geographic location and language.

Instructions

Get search volume and keyword metrics for up to 1000 keywords. Returns monthly search volume, CPC, competition, and trend data. Costs 1 credit per 50 keywords.
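The stated pricing ("1 credit per 50 keywords") can be sketched as a small helper. Rounding up per started batch of 50 is an assumption here, not something the source confirms, and `estimateCredits` is a hypothetical name, not part of the server.

```typescript
// Hypothetical sketch of the documented pricing: 1 credit per 50 keywords.
// Assumes partial batches are rounded up to a full credit (unconfirmed).
function estimateCredits(keywordCount: number): number {
  return Math.ceil(keywordCount / 50);
}
```

Under that assumption, a 50-keyword request costs 1 credit and a full 1,000-keyword request costs 20.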

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| keywords | Yes | Array of keywords to analyze | |
| location | Yes | Geographic location (e.g. "Orchard Park, NY") | |
| language | No | Language code | "en" |
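For illustration, here is an input object matching the schema above, with a minimal hand-rolled check mirroring the schema's constraints. The values are made up, and the `SearchVolumeInput`/`isValidInput` names are hypothetical; the server itself validates with zod, not this sketch.

```typescript
// Shape of a search_volume request, per the schema above.
interface SearchVolumeInput {
  keywords: string[]; // 1-1000 non-empty strings
  location: string;   // e.g. "Orchard Park, NY"
  language?: string;  // optional; defaults to "en"
}

// Minimal validity check mirroring the zod constraints (illustrative only).
function isValidInput(input: SearchVolumeInput): boolean {
  return (
    input.keywords.length >= 1 &&
    input.keywords.length <= 1000 &&
    input.keywords.every((k) => k.length >= 1) &&
    input.location.length >= 1
  );
}

// Example request with made-up values.
const exampleInput: SearchVolumeInput = {
  keywords: ["plumber near me", "emergency plumber buffalo"],
  location: "Orchard Park, NY",
};
```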

Implementation Reference

  • The handler function for the search_volume tool. Calls the API endpoint /v1/keywords/search-volume with keywords, location, and optional language, then formats the result.
    withErrorHandling(async ({ keywords, location, language }) => {
      const result = await callApi(
        "/v1/keywords/search-volume",
        { keywords, location, ...(language && { language }) },
        getAuth()
      );
      return { content: [{ type: "text" as const, text: formatResult(result.data, result) }] };
    })
  • Zod schema defining the input parameters for search_volume: keywords (array of 1-1000 strings), location (string), and optional language (string).
    {
      keywords: z.array(z.string().min(1)).min(1).max(1000).describe("Array of keywords to analyze"),
      location: z.string().min(1).describe('Geographic location (e.g. "Orchard Park, NY")'),
      language: z.string().optional().describe('Language code. Default: "en"'),
    },
    READ_ONLY,
  • Registration of the search_volume tool on the MCP server via server.tool(), inside registerKeywordTools().
    server.tool(
      "search_volume",
      "Get search volume and keyword metrics for up to 1000 keywords. Returns monthly search volume, CPC, competition, and trend data. Costs 1 credit per 50 keywords.",
      {
        keywords: z.array(z.string().min(1)).min(1).max(1000).describe("Array of keywords to analyze"),
        location: z.string().min(1).describe('Geographic location (e.g. "Orchard Park, NY")'),
        language: z.string().optional().describe('Language code. Default: "en"'),
      },
      READ_ONLY,
      withErrorHandling(async ({ keywords, location, language }) => {
        const result = await callApi(
          "/v1/keywords/search-volume",
          { keywords, location, ...(language && { language }) },
          getAuth()
        );
        return { content: [{ type: "text" as const, text: formatResult(result.data, result) }] };
      })
    );
  • Helper wrapper used to catch errors in the handler and return them as structured MCP error content.
    export function withErrorHandling<T>(
      fn: (args: T) => Promise<ToolResult>
    ): (args: T) => Promise<ToolResult> {
      return async (args) => {
        try {
          return await fn(args);
        } catch (err) {
          const message = err instanceof Error ? err.message : String(err);
          console.error(`[mcp] Tool error: ${message}`);
          return {
            content: [{ type: "text" as const, text: `Error: ${message}` }],
            isError: true,
          };
        }
      };
    }
  • Helper function that formats the API response data along with credit usage metadata into a human-readable string.
    export function formatResult(
      data: unknown,
      meta: { credits_used: number; credits_remaining: number; cached: boolean }
    ): string {
      const metaLine = `[${meta.credits_used} credit${meta.credits_used !== 1 ? "s" : ""} used | ${meta.credits_remaining} remaining${meta.cached ? " | cached" : ""}]`;
      return `${metaLine}\n\n${JSON.stringify(data, null, 2)}`;
    }
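To see what the wrapper above does in practice, here is a self-contained sketch that reuses its logic on a handler that always throws. The `ToolResult` type is a minimal assumed shape and the failing handler is hypothetical; the point is that a thrown exception comes back as structured error content rather than propagating.

```typescript
// Minimal assumed shape for MCP tool results.
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// Same logic as the withErrorHandling helper shown above.
function withErrorHandling<T>(
  fn: (args: T) => Promise<ToolResult>
): (args: T) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await fn(args);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      console.error(`[mcp] Tool error: ${message}`);
      return {
        content: [{ type: "text" as const, text: `Error: ${message}` }],
        isError: true,
      };
    }
  };
}

// Hypothetical handler that always fails: the thrown error becomes
// a structured result with isError set, instead of an exception.
const failing = withErrorHandling<void>(async () => {
  throw new Error("API unavailable");
});
```

Calling `failing()` resolves (never rejects) with `isError: true` and the text `Error: API unavailable`.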
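The formatter can be exercised standalone to show the metadata prefix it produces; the sample data and credit figures below are made up.

```typescript
// Same logic as the formatResult helper shown above.
function formatResult(
  data: unknown,
  meta: { credits_used: number; credits_remaining: number; cached: boolean }
): string {
  const metaLine = `[${meta.credits_used} credit${meta.credits_used !== 1 ? "s" : ""} used | ${meta.credits_remaining} remaining${meta.cached ? " | cached" : ""}]`;
  return `${metaLine}\n\n${JSON.stringify(data, null, 2)}`;
}

// Made-up response data and credit metadata.
const out = formatResult(
  { keyword: "plumber", volume: 1200 },
  { credits_used: 1, credits_remaining: 99, cached: false }
);
// First line reads: [1 credit used | 99 remaining]
```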
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds the credit cost (1 per 50 keywords) and lists return fields, which clarifies nondestructive behavior. It omits details such as error handling and rate limits, but overall it adds value beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first states the purpose and limit, the second states the return data and cost. Every sentence is informative with no waste, and the key action and scope are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (3 parameters, no output schema, annotations present), the description covers the core functionality and credit cost. It lacks details on error cases and result structure, but it is sufficient for basic usage: not exhaustive, but adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for all 3 parameters, but the tool description adds no meaning beyond what the schema already provides (e.g., the expected format of location or valid language codes). The baseline score is 3, and no extra semantic clarity is offered.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it gets search volume and keyword metrics for up to 1000 keywords, listing specific return fields (monthly search volume, CPC, competition, trend data). The verb 'Get' and resource 'search volume' are specific, and the limit distinguishes it from sibling tools like keyword_suggestions or related_keywords.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for bulk keyword analysis via the 1000-keyword limit and credit cost, but does not explicitly state when not to use it or compare to alternatives. The context of siblings provides differentiation, but explicit guidelines would be stronger.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
