
ai-visibility-mcp

get_visibility_score

Calculate AI visibility scores for brands across ChatGPT, Perplexity, Claude, and Gemini to track performance and identify improvement opportunities.

Instructions

Calculate an overall AI visibility score (0-100) for a brand across all four AI platforms. Includes per-platform breakdowns and improvement recommendations.

Input Schema

Name     Required   Description                Default
brand    Yes        The brand name to score    -
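
Only the single brand argument is required. As a rough sketch of how a client might invoke this tool (assuming the official MCP TypeScript SDK over a stdio transport; the build/index.js entry point is a guess, not taken from this repo), a call could look like this:

    // Sketch only: invoking get_visibility_score with the MCP TypeScript SDK client.
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    const transport = new StdioClientTransport({
      command: "node",
      args: ["build/index.js"], // hypothetical path to the ai-visibility-mcp server
    });
    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    const result = await client.callTool({
      name: "get_visibility_score",
      arguments: { brand: "Acme" }, // the only required input
    });
    console.log(result.content); // score, tier, per-platform breakdowns, recommendations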

Implementation Reference

  • The handler for the get_visibility_score tool, which calculates an AI visibility score for a brand across multiple platforms.
    server.tool(
      "get_visibility_score",
      "Calculate an overall AI visibility score (0-100) for a brand across all four AI platforms. Includes per-platform breakdowns and improvement recommendations.",
      {
        brand: z.string().describe("The brand name to score"),
      },
      async ({ brand }) => {
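        // 3600000 ms = one hour: folding this bucket into every seed keeps
        // results deterministic for a given brand within the same hour.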
        const timeBucket = Math.floor(Date.now() / 3600000);
        const results: PlatformResult[] = [];
    
        for (const platformId of PLATFORM_IDS) {
          const rng = seededRandom(`${brand}-score-${platformId}-${timeBucket}`);
          const queries = generateQueries(brand, [], 3, rng);
          const queryResults: QueryResult[] = [];
    
          for (const query of queries) {
            const queryRng = seededRandom(
              `${brand}-score-${query}-${platformId}-${timeBucket}`
            );
            const result = simulateCheck(brand, query, platformId, [], queryRng);
            queryResults.push({ query, ...result });
          }
    
          results.push(aggregatePlatformResult(platformId, queryResults));
        }
    
        const overallScore = calculateScore(results);
    
        // Generate tier
        let tier: string;
        let tierDescription: string;
        if (overallScore >= 80) {
          tier = "Excellent";
          tierDescription =
            "Your brand has strong visibility across AI platforms. Focus on maintaining and expanding your presence.";
        } else if (overallScore >= 60) {
          tier = "Good";
          tierDescription =
            "Your brand is visible on most platforms but has room for improvement. Targeted optimization can boost your score.";
        } else if (overallScore >= 40) {
          tier = "Moderate";
          tierDescription =
            "Your brand appears in some AI responses but is missing from many. Significant optimization opportunities exist.";
        } else if (overallScore >= 20) {
          tier = "Low";
          tierDescription =
            "Your brand has limited AI visibility. A comprehensive strategy is needed to improve presence across platforms.";
        } else {
          tier = "Very Low";
          tierDescription =
            "Your brand is rarely mentioned by AI platforms. Immediate action is recommended to establish AI visibility.";
        }
    
        const platformScores = results.map((r) => ({
          platform: r.platformName,
          score: calculateScore([r]),
          mentionRate: `${Math.round(r.mentionRate * 100)}%`,
          averagePosition: r.averagePosition,
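          // Dominant sentiment for the platform; ties resolve toward positive, then neutral.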
          topSentiment:
            r.sentimentBreakdown.positive >= r.sentimentBreakdown.neutral &&
            r.sentimentBreakdown.positive >= r.sentimentBreakdown.negative
              ? "positive"
              : r.sentimentBreakdown.neutral >= r.sentimentBreakdown.negative
              ? "neutral"
              : "negative",
        }));
    
        // Select relevant recommendations based on score
        const numRecs = overallScore >= 60 ? 4 : overallScore >= 30 ? 6 : 8;
        const rng = seededRandom(`${brand}-recs-${timeBucket}`);
        const selectedRecs = pickMultiple(RECOMMENDATIONS, numRecs, rng).map(
          (r) => ({
            ...r,
            recommendation: r.recommendation.replace(/{brand}/g, brand),
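
The helpers referenced in this handler (seededRandom, generateQueries, simulateCheck, aggregatePlatformResult, calculateScore, pickMultiple) are not shown in the excerpt. Purely as an illustration of the seeding mechanism, a string-keyed deterministic generator could be built along these lines; the actual seededRandom in ai-visibility-mcp may differ:

    // Illustrative sketch, not the repo's implementation: hash a string seed,
    // then drive a mulberry32-style PRNG so identical seeds yield identical sequences.
    function seededRandom(seed: string): () => number {
      // FNV-1a style hash of the seed string into 32 bits
      let h = 2166136261 >>> 0;
      for (let i = 0; i < seed.length; i++) {
        h = Math.imul(h ^ seed.charCodeAt(i), 16777619);
      }
      // mulberry32 step: each call advances the state and returns a float in [0, 1)
      return () => {
        h = (h + 0x6d2b79f5) | 0;
        let t = Math.imul(h ^ (h >>> 15), h | 1);
        t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

Because the hourly timeBucket is part of every seed string, repeated calls for the same brand within the same hour return identical scores, and the numbers shift deterministically once the hour rolls over.
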
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that the output includes 'per-platform breakdowns and improvement recommendations,' which adds some context about what the tool returns. However, it doesn't address critical behavioral aspects such as whether this is a read-only operation, whether it requires authentication, what rate limits or computational costs apply, or how it handles errors. The description is insufficient for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: a single sentence states the core purpose, followed by additional context about what's included. Every word earns its place, with no redundant or vague phrasing. It efficiently communicates the tool's value in minimal space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (calculating scores across platforms with recommendations), lack of annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and output components but lacks details on behavioral traits, error handling, or usage context. The description meets the bare minimum but leaves significant gaps for an agent to operate effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'brand' clearly documented as 'The brand name to score.' The description doesn't add any meaningful semantics beyond this—it doesn't clarify format requirements, examples, or constraints. With high schema coverage and only one parameter, the baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Calculate an overall AI visibility score (0-100) for a brand across all four AI platforms.' It specifies the verb ('calculate'), resource ('AI visibility score'), and scope ('across all four AI platforms'). However, it doesn't explicitly differentiate from sibling tools like 'check_brand_visibility' or 'compare_brands', which likely have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the tool includes 'per-platform breakdowns and improvement recommendations,' but doesn't indicate when this comprehensive approach is preferred over simpler sibling tools like 'check_single_query' or 'list_platforms.' No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/sharozdawa/ai-visibility'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.