sharozdawa

ai-visibility-mcp

Server Quality Checklist

Profile completion: 92%

A complete profile improves this server's visibility in search results.
  • Disambiguation 5/5

    Each tool has a clearly distinct purpose with no overlap: checking visibility across platforms, checking a single query, comparing brands, getting recommendations, calculating scores, and listing platforms. The descriptions reinforce distinct use cases, making misselection unlikely.

    Naming Consistency 5/5

    All tools follow a consistent verb_noun naming pattern (e.g., check_brand_visibility, get_recommendations, list_platforms). The verbs are descriptive and appropriate for each tool's function, with no deviations in style or convention.

    Tool Count 5/5

    With 6 tools, the server is well-scoped for its AI visibility domain. Each tool serves a specific and necessary function, from monitoring to analysis to recommendations, without being overly sparse or bloated.

    Completeness 5/5

    The toolset provides complete coverage for the AI visibility domain, including monitoring (check_brand_visibility, check_single_query), analysis (compare_brands, get_visibility_score), recommendations (get_recommendations), and platform information (list_platforms). No obvious gaps exist for core workflows.

  • Average 3.2/5 across 6 of 6 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 6 tools.
  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

Tool Scores

  • get_recommendations

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'actionable recommendations' and 'prioritized suggestions', implying a read-only operation that returns advice, but does not disclose critical details like whether it performs a 'quick check' (as hinted in the schema), rate limits, authentication needs, or what happens if the brand is unknown. For a tool with no annotations, this is a significant gap.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is concise and front-loaded, consisting of two clear sentences that efficiently convey the tool's purpose and prioritization logic. There is no wasted text, though it could be slightly more structured by explicitly mentioning the 'brand' parameter.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is minimally adequate. It covers the core purpose but lacks usage guidelines, behavioral details, and output information. With no annotations to fill gaps, the description should do more to be complete, but it meets a basic threshold.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents both parameters ('brand' and 'current_score'). The description adds no parameter-specific semantics beyond implying that recommendations are based on 'current score and industry best practices', which aligns with the schema but does not provide additional syntax or format details. Baseline 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Get actionable recommendations to improve a brand's AI visibility' with 'Prioritized suggestions based on current score and industry best practices.' It specifies the verb ('Get'), resource ('recommendations'), and scope ('improve a brand's AI visibility'), but does not explicitly differentiate from sibling tools like 'check_brand_visibility' or 'get_visibility_score', which focus on assessment rather than improvement suggestions.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It mentions 'current score and industry best practices' but does not specify prerequisites, exclusions, or when to choose this over siblings such as 'check_brand_visibility' or 'compare_brands'. This leaves the agent without clear usage context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_visibility_score

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the output includes 'per-platform breakdowns and improvement recommendations,' which adds some context about what the tool returns. However, it doesn't address critical behavioral aspects like whether this is a read-only operation, if it requires authentication, rate limits, computational cost, or how it handles errors. The description is insufficient for a tool with no annotation coverage.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise and front-loaded: a single sentence states the core purpose, followed by additional context about what's included. Every word earns its place, with no redundant or vague phrasing. It efficiently communicates the tool's value in minimal space.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (calculating scores across platforms with recommendations), lack of annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and output components but lacks details on behavioral traits, error handling, or usage context. The description meets the bare minimum but leaves significant gaps for an agent to operate effectively.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema description coverage is 100%, with the single parameter 'brand' clearly documented as 'The brand name to score.' The description doesn't add any meaningful semantics beyond this—it doesn't clarify format requirements, examples, or constraints. With high schema coverage and only one parameter, the baseline score of 3 is appropriate as the schema does the heavy lifting.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Calculate an overall AI visibility score (0-100) for a brand across all four AI platforms.' It specifies the verb ('calculate'), resource ('AI visibility score'), and scope ('across all four AI platforms'). However, it doesn't explicitly differentiate from sibling tools like 'check_brand_visibility' or 'compare_brands', which likely have overlapping functionality.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. It mentions the tool includes 'per-platform breakdowns and improvement recommendations,' but doesn't indicate when this comprehensive approach is preferred over simpler sibling tools like 'check_single_query' or 'list_platforms.' No exclusions or prerequisites are stated.

  • list_platforms

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It mentions the output includes 'details about how each sources and presents brand information', which adds some behavioral context about the return format. However, it doesn't disclose critical traits like whether this is a read-only operation, potential rate limits, authentication needs, or pagination behavior for a list operation.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence that front-loads the core action ('List all supported AI platforms') and adds necessary detail about the output. Every word earns its place with zero redundancy or fluff.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is adequate but has gaps. It explains what the tool returns but lacks behavioral context (e.g., safety, performance). For a list operation with no structured output schema, more detail on return format would help, though the mention of 'details' provides some guidance.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description appropriately doesn't waste space on parameters, maintaining focus on the tool's purpose. Baseline for 0 parameters is 4, as no parameter semantics are needed.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb ('List') and resource ('supported AI platforms'), and specifies the scope ('with details about how each sources and presents brand information'). It distinguishes from siblings by focusing on platform metadata rather than brand analysis operations. However, it doesn't explicitly contrast with specific sibling tools, keeping it at 4 rather than 5.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus the sibling tools. While it implies this is for platform discovery rather than brand checking, it doesn't specify scenarios, prerequisites, or alternatives. The agent must infer usage from the purpose alone.

  • check_single_query

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions what the tool returns, it doesn't describe important behavioral aspects like rate limits, authentication requirements, error conditions, whether it makes external API calls, or how fresh the data is. The description provides basic output information but lacks operational context.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is perfectly concise - a single sentence that efficiently communicates the tool's purpose and return values. Every word earns its place with no redundancy or unnecessary elaboration.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with 3 parameters and no output schema, the description provides basic purpose and return information but lacks important context. Without annotations and with no output schema, the description should ideally explain more about the return structure, error handling, and operational constraints to be truly complete.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the input schema already fully documents all three parameters. The description doesn't add any meaningful parameter semantics beyond what's in the schema - it mentions the parameters generically but provides no additional context about format constraints, examples beyond the schema, or edge cases.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Check if a brand is mentioned'), the resource ('for a specific query on a specific AI platform'), and distinguishes it from siblings by specifying it's a single-query check rather than broader visibility analysis. It explicitly mentions the return values which further clarifies the purpose.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus the sibling tools like check_brand_visibility, compare_brands, or get_visibility_score. It doesn't mention prerequisites, limitations, or alternative scenarios where other tools might be more appropriate.

  • check_brand_visibility

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool 'simulates realistic queries' and 'analyzes' various metrics, but doesn't specify whether this is a read-only operation, requires authentication, has rate limits, or provides structured output. The description lacks critical behavioral details needed for safe and effective invocation.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured in two sentences: the first states the core purpose and scope, the second elaborates on the analysis components. There's minimal redundancy, though the second sentence could be slightly more concise by integrating the platform list more smoothly.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with 3 parameters, 100% schema coverage, but no annotations or output schema, the description is moderately complete. It covers the purpose and analysis scope adequately but lacks behavioral transparency and output details. The absence of annotations and output schema means the description should compensate more than it does.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds marginal value by mentioning 'industry keywords' as examples and noting that platforms 'default to all four', but doesn't provide additional semantic context beyond what's in the schema. This meets the baseline for high schema coverage.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Check a brand's visibility'), the target resource ('across AI platforms'), and the scope of analysis ('mention rates, positions, sentiment, and competitor landscape'). It distinguishes this tool from siblings by focusing on multi-platform brand visibility analysis rather than single queries, comparisons, recommendations, scores, or platform listings.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage for brand visibility analysis across AI platforms, but it doesn't explicitly state when to use this tool versus alternatives like 'compare_brands' or 'get_visibility_score'. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate contexts from the tool's purpose.

  • compare_brands

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but lacks behavioral details. It doesn't disclose whether this is a read-only operation, requires authentication, has rate limits, or what happens with invalid brands. The description mentions outputs but not format or limitations.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two concise sentences with zero waste. First sentence states purpose and scope, second sentence details output components. Perfectly front-loaded and appropriately sized.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with 2 parameters, 100% schema coverage, but no annotations and no output schema, the description is adequate but has clear gaps. It explains what the tool does but lacks behavioral context and output format details that would be helpful given the absence of structured output documentation.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional meaning about parameters beyond what's in the schema (e.g., no examples of valid brand formats or keyword usage). Baseline 3 is appropriate when schema does the heavy lifting.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('compare'), resource ('AI visibility of multiple brands'), and output format ('side by side... per-platform scores, overall rankings, and relative strengths'). It distinguishes from siblings like 'check_single_query' (single brand) and 'get_visibility_score' (single score).

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage for multi-brand comparison scenarios, but provides no explicit guidance on when to use this tool versus alternatives like 'check_brand_visibility' or 'get_recommendations'. No exclusions or prerequisites are mentioned.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

Copy to your README.md:

Score Badge

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository:

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
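As a minimal sketch (not Glama's actual implementation), the weighting described above can be expressed as:

```python
# Dimension weights for a single tool's TDQS; each dimension is scored 1-5.
WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted average of one tool's six dimension scores."""
    return sum(WEIGHTS[dim] * score for dim, score in scores.items())

def overall_score(tool_scores, coherence):
    """Combine per-tool TDQS values with the server coherence score."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    # 60% mean + 40% minimum, so a single weak tool drags the score down.
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    # Tool Definition Quality counts 70%, Server Coherence 30%.
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to its letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For instance, a server whose tools all score 3 on every dimension with a coherence score of 3 comes out at an overall 3.0, which lands in the B tier and passes.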

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/sharozdawa/ai-visibility'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.