Local SEO Data

Official

reputation_audit

Read-only

Audit your online reputation across review platforms. Get a reputation score, sentiment analysis, response rate, and recommendations for improvement.

Instructions

Audit online reputation across review platforms. Returns a reputation score, sentiment analysis (positive/negative themes), response rate, and recommendations. Costs 30 credits. Note: this tool queries multiple review sources and may take 10-30 seconds to return — this is normal, not an error.

Input Schema

Name            Required  Description     Default
business_name   Yes       Business name   —
location        Yes       City and state  —
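Both parameters are required strings. As an illustrative sketch (the business values below are invented), a minimal check mirroring what the tool's zod schema enforces:

```typescript
// Illustrative arguments for a reputation_audit call; values are made up.
const args = {
  business_name: "Joe's Plumbing",
  location: "Austin, TX",
};

// Minimal required-field check mirroring the schema: both fields must be
// non-empty strings. (The real server validates with zod, not this helper.)
function validateArgs(a: Record<string, unknown>): string[] {
  const missing: string[] = [];
  for (const key of ["business_name", "location"]) {
    const v = a[key];
    if (typeof v !== "string" || v.length === 0) missing.push(key);
  }
  return missing;
}

console.log(validateArgs(args)); // prints: []
console.log(validateArgs({ business_name: "Joe's Plumbing" })); // prints: [ 'location' ]
```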

Implementation Reference

  • The reputation_audit tool handler: calls API endpoint /v1/audit/reputation with business_name and location, returns formatted results. Uses a 120-second timeout.
      withErrorHandling(async ({ business_name, location }) => {
        const result = await callApi(
          "/v1/audit/reputation",
          { business_name, location },
          getAuth(),
          120_000
        );
        return { content: [{ type: "text" as const, text: formatResult(result.data, result) }] };
      })
  • Input schema for reputation_audit: requires business_name (string) and location (string).
    {
      business_name: z.string().describe("Business name"),
      location: z.string().describe("City and state"),
    },
  • Tool registration via server.tool('reputation_audit', ...) inside registerAuditTools function. Description notes 30 credit cost and 10-30 second response time.
    server.tool(
      "reputation_audit",
      "Audit online reputation across review platforms. Returns a reputation score, sentiment analysis (positive/negative themes), response rate, and recommendations. Costs 30 credits. Note: this tool queries multiple review sources and may take 10-30 seconds to return — this is normal, not an error.",
      {
        business_name: z.string().describe("Business name"),
        location: z.string().describe("City and state"),
      },
      READ_ONLY,
      withErrorHandling(async ({ business_name, location }) => {
        const result = await callApi(
          "/v1/audit/reputation",
          { business_name, location },
          getAuth(),
          120_000
        );
        return { content: [{ type: "text" as const, text: formatResult(result.data, result) }] };
      })
    );
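The callApi helper invoked by the handler is not shown on this page. A plausible sketch, assuming a JSON POST API with bearer auth and an AbortSignal-based timeout — API_BASE, the header names, and the response envelope are all assumptions inferred from how the handler uses the result:

```typescript
// Hypothetical base URL; the real endpoint host is not shown on this page.
const API_BASE = "https://api.example.com";

// Response envelope assumed from the handler's use of result.data and the
// pollAsyncJob return type shown below.
interface ApiResult {
  data: unknown;
  credits_used: number;
  credits_remaining: number;
  cached: boolean;
}

async function callApi(
  path: string,
  body: Record<string, unknown>,
  auth: string,
  timeoutMs: number = 30_000
): Promise<ApiResult> {
  // AbortSignal.timeout aborts the request once timeoutMs elapses (Node 18+),
  // matching the 120_000 ms timeout the reputation_audit handler passes.
  const res = await fetch(`${API_BASE}${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${auth}`,
    },
    body: JSON.stringify(body),
    signal: AbortSignal.timeout(timeoutMs),
  });
  if (!res.ok) {
    throw new Error(`API error ${res.status}: ${await res.text()}`);
  }
  return (await res.json()) as ApiResult;
}
```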
  • pollAsyncJob helper: polls an async job URL until the job reports 'complete' or 'failed', with exponential backoff (2 s initial delay, 1.3× growth capped at 4 s) up to a 3-minute total wait. Not used by reputation_audit (which is synchronous), but a shared helper in the same module.
    async function pollAsyncJob(
      pollUrl: string,
      auth: string,
      maxWaitMs: number = 180_000
    ): Promise<{ data: unknown; credits_used: number; credits_remaining: number; cached: boolean }> {
      const start = Date.now();
      let delay = 2000;
    
      while (Date.now() - start < maxWaitMs) {
        await new Promise((r) => setTimeout(r, delay));
        const result = await callApiGet(pollUrl, auth);
        const data = result.data as Record<string, unknown>;
    
        if (data.status === "complete") {
          return result;
        }
        if (data.status === "failed") {
          throw new Error((data.error as string) || "Audit job failed");
        }
        // Still pending/running — increase delay up to 4s
        delay = Math.min(delay * 1.3, 4000);
      }
    
      throw new Error("Audit timed out after 3 minutes. The job may still be processing — try polling the status URL.");
    }
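The backoff loop above starts at 2 s and multiplies the delay by 1.3 until it hits the 4 s cap. The resulting poll schedule can be computed directly; a small sketch:

```typescript
// Reproduces pollAsyncJob's backoff schedule: start at 2 s, multiply by 1.3,
// cap at 4 s. Returns the delays (in ms) before each of the first n polls.
function backoffSchedule(
  n: number,
  initial = 2000,
  factor = 1.3,
  cap = 4000
): number[] {
  const delays: number[] = [];
  let delay = initial;
  for (let i = 0; i < n; i++) {
    delays.push(delay);
    delay = Math.min(delay * factor, cap);
  }
  return delays;
}

// The first polls happen after 2000, 2600, and 3380 ms, then settle at 4000 ms.
console.log(backoffSchedule(5)); // prints: [ 2000, 2600, 3380, 4000, 4000 ]
```

At the steady-state 4 s delay, a 3-minute budget allows roughly 45 polls before the timeout error is thrown.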
  • withErrorHandling wrapper and formatResult helper used by the reputation_audit handler to catch errors and format API responses.
    export function withErrorHandling<T>(
      fn: (args: T) => Promise<ToolResult>
    ): (args: T) => Promise<ToolResult> {
      return async (args) => {
        try {
          return await fn(args);
        } catch (err) {
          const message = err instanceof Error ? err.message : String(err);
          console.error(`[mcp] Tool error: ${message}`);
          return {
            content: [{ type: "text" as const, text: `Error: ${message}` }],
            isError: true,
          };
        }
      };
    }
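To see the wrapper's effect, a self-contained usage sketch — the ToolResult shape is assumed from the handler code, and the wrapper is reproduced from above (console logging omitted):

```typescript
// Minimal ToolResult shape assumed from the handler's return value.
interface ToolResult {
  content: { type: "text"; text: string }[];
  isError?: boolean;
}

// withErrorHandling as shown above, reproduced so this example is
// self-contained (the console.error call is omitted here).
function withErrorHandling<T>(
  fn: (args: T) => Promise<ToolResult>
): (args: T) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await fn(args);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return {
        content: [{ type: "text", text: `Error: ${message}` }],
        isError: true,
      };
    }
  };
}

// A handler that always fails: instead of rejecting, the wrapper converts the
// exception into an isError tool result the MCP client can display.
const failing = withErrorHandling(async (_args: unknown) => {
  throw new Error("upstream API unreachable");
});

failing({}).then((r) => console.log(r.isError, r.content[0].text));
// prints: true Error: upstream API unreachable
```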
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, openWorldHint) already indicate safe, non-deterministic read. Description adds value by disclosing 30-credit cost, querying multiple sources, and 10-30 second delay as normal. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four concise sentences with clear structure: purpose, outputs, cost, and timing note. Every sentence adds value, no redundancy. Front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given two simple params and no output schema, description fully covers what the tool does, what it returns, and highlights the notable delay. Sufficient for an agent to use correctly without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic descriptions ('Business name', 'City and state'). Description does not add further semantic detail beyond schema. Meets baseline but doesn't exceed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Audit' and clear resource 'online reputation across review platforms,' listing outputs (reputation score, sentiment, response rate, recommendations). Differentiates from siblings like google_reviews (specific platform) and multi_platform_reviews (likely broader aggregation with different outputs).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States credit cost and expected latency, subtly guiding usage for non-urgent cases. Does not explicitly compare to alternatives or state when not to use, but the timing note helps set expectations. Slightly lacking exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
