Glama: sharozdawa/ai-visibility-mcp

compare_brands

Compare AI visibility across platforms for multiple brands. Analyze per-platform scores, overall rankings, and relative strengths to understand competitive positioning.

Instructions

Compare the AI visibility of multiple brands side by side. Shows per-platform scores, overall rankings, and relative strengths.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| brands | Yes | List of brand names to compare (2-10 brands) | |
| keyword | No | Optional industry keyword for context (e.g., 'project management') | |
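For illustration, a valid call might pass arguments like the following (the brand names are hypothetical). The helper below mirrors the schema's 2-10 brand constraint in plain TypeScript, without depending on zod:

```typescript
// Example arguments for compare_brands (hypothetical brand names).
const args = {
  brands: ["Notion", "Asana", "Trello"],
  keyword: "project management",
};

// Plain-TypeScript stand-in for the zod constraint: 2-10 brands required.
function validateBrands(brands: string[]): boolean {
  return brands.length >= 2 && brands.length <= 10;
}

console.log(validateBrands(args.brands)); // true
console.log(validateBrands(["OnlyOne"])); // false
```

In the actual server, zod rejects out-of-range input before the handler runs; this sketch only shows what the constraint means.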

Implementation Reference

  • The 'compare_brands' tool registration and handler implementation. It takes a list of brands and an optional keyword, generates simulated data for them, and returns a comparative analysis.
```typescript
server.tool(
  "compare_brands",
  "Compare the AI visibility of multiple brands side by side. Shows per-platform scores, overall rankings, and relative strengths.",
  {
    brands: z
      .array(z.string())
      .min(2)
      .max(10)
      .describe("List of brand names to compare (2-10 brands)"),
    keyword: z
      .string()
      .optional()
      .describe(
        "Optional industry keyword for context (e.g., 'project management')"
      ),
  },
  async ({ brands, keyword }) => {
    // Bucket time to the hour so repeated calls within the same hour
    // produce identical, deterministic results.
    const timeBucket = Math.floor(Date.now() / 3600000);
    const keywords = keyword ? [keyword] : [];
    const brandResults: Array<{
      brand: string;
      overallScore: number;
      platformScores: Record<string, number>;
      mentionRates: Record<string, number>;
      topSentiment: string;
    }> = [];

    for (const brand of brands) {
      const platformResults: PlatformResult[] = [];

      for (const platformId of PLATFORM_IDS) {
        const rng = seededRandom(
          `${brand}-compare-${platformId}-${timeBucket}`
        );
        const queries = generateQueries(brand, keywords, 3, rng);
        const queryResults: QueryResult[] = [];

        for (const query of queries) {
          // Each (brand, query, platform, hour) tuple gets its own RNG seed.
          const queryRng = seededRandom(
            `${brand}-compare-${query}-${platformId}-${timeBucket}`
          );
          const result = simulateCheck(
            brand,
            query,
            platformId,
            keywords,
            queryRng
          );
          queryResults.push({ query, ...result });
        }

        platformResults.push(
          aggregatePlatformResult(platformId, queryResults)
        );
      }

      const overallScore = calculateScore(platformResults);

      const platformScores: Record<string, number> = {};
      const mentionRates: Record<string, number> = {};
      let totalPositive = 0;
      let totalNeutral = 0;
      let totalNegative = 0;

      for (const pr of platformResults) {
        platformScores[pr.platformName] = calculateScore([pr]);
        mentionRates[pr.platformName] = Math.round(pr.mentionRate * 100);
        totalPositive += pr.sentimentBreakdown.positive;
        totalNeutral += pr.sentimentBreakdown.neutral;
        totalNegative += pr.sentimentBreakdown.negative;
      }

      const topSentiment =
        totalPositive >= totalNeutral && totalPositive >= totalNegative
          ? "positive"
          : totalNeutral >= totalNegative
            ? "neutral"
            : "negative";
      // ...remainder of the handler (pushing to brandResults, ranking,
      // and response assembly) is truncated in the source.
```
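The seededRandom helper is referenced but not shown in the excerpt. A minimal sketch of a deterministic string-seeded PRNG, assuming an FNV-1a-style hash feeding a mulberry32 generator (the server's actual implementation may differ):

```typescript
// Hypothetical sketch of seededRandom: hash the seed string, then use
// mulberry32 so the same seed always yields the same number sequence.
function seededRandom(seed: string): () => number {
  // FNV-1a style hash of the seed string into a 32-bit state.
  let h = 2166136261 >>> 0;
  for (let i = 0; i < seed.length; i++) {
    h ^= seed.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  // mulberry32: a small, fast 32-bit PRNG returning floats in [0, 1).
  return () => {
    h = (h + 0x6d2b79f5) >>> 0;
    let t = Math.imul(h ^ (h >>> 15), 1 | h);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const a = seededRandom("BrandA-compare-chatgpt-1234")();
const b = seededRandom("BrandA-compare-chatgpt-1234")();
console.log(a === b); // true
```

Seeding on `${brand}-...-${timeBucket}` is what makes the simulated scores stable for an hour at a time: the "randomness" is a pure function of the seed string.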
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but lacks behavioral details. It doesn't disclose whether this is a read-only operation, whether authentication is required, whether rate limits apply, or what happens with invalid brands. The description mentions outputs but not their format or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
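One way to close this gap is to declare behavioral hints alongside the tool. The field names below come from the MCP specification's tool annotations; the exact registration API is assumed and may differ by SDK version:

```typescript
// Hypothetical: annotations telling agents this tool is safe to call.
// Field names follow the MCP tool-annotations spec.
const compareBrandsAnnotations = {
  title: "Compare Brand Visibility",
  readOnlyHint: true,     // no side effects: only reads/simulates data
  destructiveHint: false, // never deletes or mutates anything
  idempotentHint: true,   // same inputs in the same hour => same output
  openWorldHint: false,   // operates on simulated data, not live services
};
```

With these hints present, an agent can call the tool without first reasoning about whether it might mutate state.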

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with zero waste. First sentence states purpose and scope, second sentence details output components. Perfectly front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two parameters, full schema coverage, no annotations, and no output schema, the description is adequate but has clear gaps. It explains what the tool does but lacks behavioral context and output format details, which matter given the absence of structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional meaning about the parameters beyond what's in the schema (e.g., no examples of valid brand formats or keyword usage). A baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('compare'), resource ('AI visibility of multiple brands'), and output format ('side by side... per-platform scores, overall rankings, and relative strengths'). It distinguishes from siblings like 'check_single_query' (single brand) and 'get_visibility_score' (single score).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for multi-brand comparison scenarios, but provides no explicit guidance on when to use this tool versus alternatives like 'check_brand_visibility' or 'get_recommendations'. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
