
ai-visibility-mcp

check_brand_visibility

Analyze brand visibility across AI platforms by simulating queries to measure mention rates, positions, sentiment, and competitor presence.

Instructions

Check a brand's visibility across AI platforms (ChatGPT, Perplexity, Claude, Gemini). Simulates realistic queries and analyzes mention rates, positions, sentiment, and competitor landscape.

Input Schema

Name | Required | Description | Default
brand | Yes | The brand name to check visibility for. | —
keywords | No | Industry keywords related to the brand (e.g., ['SEO', 'analytics']). Used to generate relevant queries. | —
platforms | No | Which AI platforms to check. | All four platforms
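
A hypothetical call with all three arguments; the brand and keyword values are illustrative, and the platform identifiers are the lowercase keys used in the handler's `platformBias` map:

    {
      "brand": "Ahrefs",
      "keywords": ["SEO", "analytics"],
      "platforms": ["chatgpt", "perplexity"]
    }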

Implementation Reference

  • The core logic (handler) for `check_brand_visibility`, which simulates brand presence, sentiment, and context from the input parameters.
    export async function checkBrandVisibility(
      brand: string,
      query: string,
      platform: string,
      keywords: string[] = []
    ): Promise<CheckResult> {
      // Simulate network delay (50-200ms)
      await new Promise((resolve) =>
        setTimeout(resolve, 50 + Math.random() * 150)
      );
    
      // Create a deterministic-ish seed from inputs, but add time-based variance
      const timeBucket = Math.floor(Date.now() / 3600000); // changes every hour
      const seed = `${brand}-${query}-${platform}-${timeBucket}`;
      const rng = seededRandom(seed);
    
      // Platform-specific mention probability
      const platformBias: Record<string, number> = {
        chatgpt: 0.55,
        perplexity: 0.65, // Perplexity tends to cite more sources
        claude: 0.5,
        gemini: 0.6,
      };
    
      // Query type affects mention probability
      let mentionBoost = 0;
      const queryLower = query.toLowerCase();
      const brandLower = brand.toLowerCase();
    
      if (queryLower.includes(brandLower)) {
        mentionBoost = 0.3; // Direct brand queries almost always mention it
      }
      if (
        queryLower.includes("what is") ||
        queryLower.includes("review") ||
        queryLower.includes("pros and cons")
      ) {
        mentionBoost += 0.15;
      }
      if (queryLower.includes("alternative") || queryLower.includes("compare")) {
        mentionBoost += 0.1;
      }
    
      const baseProbability = platformBias[platform] || 0.5;
      const mentioned = rng() < Math.min(baseProbability + mentionBoost, 0.95);
    
      const competitorPool = getCompetitorPool(keywords).filter(
        (c) => c.toLowerCase() !== brandLower
      );
      const numCompetitors = Math.floor(rng() * 4) + 2;
      const competitors = pickMultiple(competitorPool, numCompetitors, rng);
    
      const keyword = keywords.length > 0 ? pickRandom(keywords, rng) : "software";
    
      if (mentioned) {
        // Determine sentiment
        const sentimentRoll = rng();
        let sentiment: "positive" | "neutral" | "negative";
        if (sentimentRoll < 0.5) sentiment = "positive";
        else if (sentimentRoll < 0.82) sentiment = "neutral";
        else sentiment = "negative";
    
        const templates = SENTIMENT_TEMPLATES[sentiment];
        const template = pickRandom(templates, rng);
    
        // Build the response
        const mainText = template
          .replace(/{brand}/g, brand)
          .replace(/{keyword}/g, keyword);
        const competitorMention =
          competitors.length > 0
            ? `\n\nOther notable tools in this space include ${competitors.join(", ")}.`
            : "";
        const fullResponse = mainText + competitorMention;
    
        // Find the position of the brand mention
        const position =
          rng() < 0.4 ? 1 : rng() < 0.7 ? 2 : Math.floor(rng() * 3) + 3;
    
        // Extract the context around the brand mention (indexOf computed once)
        const brandIndex = mainText.indexOf(brand);
        const contextStart = Math.max(0, brandIndex - 50);
        const contextEnd = Math.min(
          mainText.length,
          brandIndex + brand.length + 50
        );
        const context =
          (contextStart > 0 ? "..." : "") +
          mainText.slice(contextStart, contextEnd) +
          (contextEnd < mainText.length ? "..." : "");
    
        return {
          mentioned: true,
          position,
          context,
          fullResponse,
          sentiment,
          competitors,
        };
      } else {
        // Brand not mentioned
        const template = pickRandom(NOT_MENTIONED_TEMPLATES, rng);
        const competitorList = competitors.slice(0, 4).join(", ");
        const fullResponse = template
          .replace(/{keyword}/g, keyword)
          .replace(/{competitors}/g, competitorList)
          .replace(/{brand}/g, brand);
    
        return {
          mentioned: false,
          position: null,
          context: "",
          fullResponse,
          sentiment: "neutral",
          competitors,
        };
      }
    }
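  • The handler references several helpers (`seededRandom`, `pickRandom`, `pickMultiple`, `getCompetitorPool`) and two template constants that are not shown on this page. A minimal sketch of plausible implementations, assuming a mulberry32-style PRNG behind a simple string hash; the template strings and competitor data are placeholders, not the server's actual content:
    // Illustrative only: hash the string seed to 32 bits, then run mulberry32
    function seededRandom(seed: string): () => number {
      let h = 1779033703 ^ seed.length;
      for (let i = 0; i < seed.length; i++) {
        h = Math.imul(h ^ seed.charCodeAt(i), 3432918353);
        h = (h << 13) | (h >>> 19);
      }
      let a = h >>> 0;
      return () => {
        a = (a + 0x6d2b79f5) | 0;
        let t = Math.imul(a ^ (a >>> 15), 1 | a);
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }
    
    function pickRandom<T>(items: T[], rng: () => number): T {
      return items[Math.floor(rng() * items.length)];
    }
    
    function pickMultiple<T>(items: T[], count: number, rng: () => number): T[] {
      // Fisher-Yates shuffle of a copy, then take the first `count` entries
      const copy = [...items];
      for (let i = copy.length - 1; i > 0; i--) {
        const j = Math.floor(rng() * (i + 1));
        [copy[i], copy[j]] = [copy[j], copy[i]];
      }
      return copy.slice(0, Math.min(count, copy.length));
    }
    
    function getCompetitorPool(_keywords: string[]): string[] {
      // Placeholder data; the real server presumably maps keywords to actual brands
      return ["Competitor A", "Competitor B", "Competitor C", "Competitor D", "Competitor E"];
    }
    
    // Placeholder templates; {brand} and {keyword} are substituted by the handler
    const SENTIMENT_TEMPLATES: Record<"positive" | "neutral" | "negative", string[]> = {
      positive: ["{brand} is widely regarded as a strong choice for {keyword}."],
      neutral: ["{brand} is one of several options for {keyword}."],
      negative: ["Some users report limitations when using {brand} for {keyword}."],
    };
    
    const NOT_MENTIONED_TEMPLATES: string[] = [
      "Popular {keyword} tools include {competitors}.",
    ];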
  • The TypeScript interface defining the output structure for the `check_brand_visibility` tool.
    export interface CheckResult {
      mentioned: boolean;
      position: number | null;
      context: string;
      fullResponse: string;
      sentiment: "positive" | "neutral" | "negative";
      competitors: string[];
    }
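  • For illustration, a `CheckResult` for a mentioned brand might look like this (all values invented):
    {
      mentioned: true,
      position: 2,
      context: "...Ahrefs is widely regarded as a strong choice for SEO...",
      fullResponse:
        "Ahrefs is widely regarded as a strong choice for SEO.\n\nOther notable tools in this space include Competitor A, Competitor B.",
      sentiment: "positive",
      competitors: ["Competitor A", "Competitor B"],
    }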
  • Registration of the `check_brand_visibility` tool within the MCP server implementation.
    // Tool: check_brand_visibility
    server.tool(
      "check_brand_visibility",
      "Check a brand's visibility across AI platforms (ChatGPT, Perplexity, Claude, Gemini). Simulates realistic queries and analyzes mention rates, positions, sentiment, and competitor landscape.",
      {
        // Parameter shapes reconstructed from the Input Schema table above (zod assumed);
        // the original snippet is truncated at this point.
        brand: z.string().describe("The brand name to check visibility for"),
        keywords: z
          .array(z.string())
          .optional()
          .describe(
            "Industry keywords related to the brand (e.g., ['SEO', 'analytics']). Used to generate relevant queries."
          ),
        platforms: z
          .array(z.enum(["chatgpt", "perplexity", "claude", "gemini"]))
          .optional()
          .describe("Which AI platforms to check. Defaults to all four platforms."),
      },
      async (args) => {
        // Handler body not shown in the source; a sketch follows in the next bullet.
      }
    );
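  • A hedged sketch of how the truncated handler might wire `checkBrandVisibility` into an MCP response. The function name, the single-query-per-platform simplification, and the aggregation shape are assumptions; the real server likely generates several query variants per platform:
    const ALL_PLATFORMS = ["chatgpt", "perplexity", "claude", "gemini"];
    
    async function handleCheckBrandVisibility(args: {
      brand: string;
      keywords?: string[];
      platforms?: string[];
    }) {
      const platforms = args.platforms ?? ALL_PLATFORMS;
      const keywords = args.keywords ?? [];
      // One illustrative query; the real tool presumably builds many per platform
      const query = `best ${keywords[0] ?? "software"} tools`;
      const results = await Promise.all(
        platforms.map(async (platform) => ({
          platform,
          result: await checkBrandVisibility(args.brand, query, platform, keywords),
        }))
      );
      const mentionRate =
        results.filter((r) => r.result.mentioned).length / results.length;
      return {
        content: [
          { type: "text" as const, text: JSON.stringify({ mentionRate, results }, null, 2) },
        ],
      };
    }
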
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It notes that the tool 'simulates realistic queries' and 'analyzes' various metrics, but doesn't specify whether the operation is read-only, requires authentication, is rate-limited, or returns structured output. The description lacks critical behavioral details needed for safe and effective invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
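
The MCP spec's tool annotations exist for exactly these hints. A sketch of how the registration could declare them; the annotation field names come from the spec, while the exact `server.tool` overload and the `handler` reference are assumptions that vary by SDK version:

    server.tool(
      "check_brand_visibility",
      "Check a brand's visibility across AI platforms...",
      { /* input schema as shown above */ },
      {
        readOnlyHint: true,   // pure simulation; nothing is created or modified
        openWorldHint: false, // no external services are actually contacted
      },
      handler
    );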

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core purpose and scope, the second elaborates on the analysis components. There's minimal redundancy, though the second sentence could be slightly more concise by integrating the platform list more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with three parameters and full schema description coverage, but no annotations or output schema, the description is moderately complete. It covers the purpose and analysis scope adequately but lacks behavioral transparency and output details. Because annotations and an output schema are absent, the description should compensate more than it does.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
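
Publishing the output shape alongside the tool would close part of that gap. Recent MCP TypeScript SDK versions accept an `outputSchema` on registration; the sketch below assumes that API and zod, with field shapes mirroring the `CheckResult` interface above:

    server.registerTool(
      "check_brand_visibility",
      {
        description: "Check a brand's visibility across AI platforms...",
        inputSchema: {
          brand: z.string(),
          keywords: z.array(z.string()).optional(),
          platforms: z.array(z.string()).optional(),
        },
        outputSchema: {
          mentioned: z.boolean(),
          position: z.number().nullable(),
          context: z.string(),
          fullResponse: z.string(),
          sentiment: z.enum(["positive", "neutral", "negative"]),
          competitors: z.array(z.string()),
        },
      },
      handler
    );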

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds marginal value by mentioning 'industry keywords' as examples and noting that platforms 'default to all four', but doesn't provide additional semantic context beyond what's in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check a brand's visibility'), the target resource ('across AI platforms'), and the scope of analysis ('mention rates, positions, sentiment, and competitor landscape'). It distinguishes this tool from siblings by focusing on multi-platform brand visibility analysis rather than single queries, comparisons, recommendations, scores, or platform listings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is meant for brand visibility analysis across AI platforms, but it doesn't explicitly state when to use it versus alternatives like 'compare_brands' or 'get_visibility_score'. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate contexts from the tool's purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
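
A single guidance sentence appended to the description would address this; for example (hypothetical wording, using the sibling tools the review names):

    const description =
      "Check a brand's visibility across AI platforms (ChatGPT, Perplexity, Claude, Gemini). " +
      "Use this for a broad visibility snapshot of one brand; " +
      "prefer compare_brands for head-to-head comparisons and get_visibility_score for a single aggregate score.";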
