ai-visibility-scanner
Server Details
Scan websites for AI visibility and marketing health with interactive dashboard.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: edelgad3/ai-visibility-scanner
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3/5 across 5 of 5 tools scored. Lowest: 2.4/5.
Each tool has a clearly distinct purpose with no ambiguity. compare_scan focuses on competitor comparisons, get_score_breakdown provides detailed analytics, refresh_scan handles re-scanning, scan_website performs the core scanning function, and submit_lead manages lead submission. There is no overlap in functionality.
All tools follow a consistent verb_noun naming pattern (e.g., compare_scan, get_score_breakdown, refresh_scan, scan_website, submit_lead). The pattern is uniform throughout, with no mixing of conventions or styles.
With 5 tools, this server is well-scoped for its AI visibility scanning purpose. Each tool serves a specific role in the scanning and analysis workflow, from initial scans to comparisons and lead management, making the count appropriate and efficient.
The tool set covers the core scanning and analysis lifecycle well, including scanning, refreshing, comparing, and detailed breakdowns, with lead submission for service follow-up. A minor gap might be the absence of tools for managing or deleting scans or leads, but agents can work around this with the provided tools.
Available Tools
5 tools
compare_scan (C)
Scan a competitor and return side-by-side comparison
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | ||
| max_pages | No | ||
| competitor_url | Yes |
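MCP clients invoke tools through JSON-RPC 2.0 `tools/call` requests. A minimal sketch of what a compare_scan call might look like; the URLs are placeholders, and the argument meanings are inferred from the names alone, since the input schema documents none of them:

```python
import json

# Hypothetical compare_scan invocation over MCP's JSON-RPC tools/call method.
# Argument semantics are guesses from the parameter names: the schema carries
# no descriptions for url, competitor_url, or max_pages.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_scan",
        "arguments": {
            "url": "https://example.com",             # presumably your own site
            "competitor_url": "https://example.org",  # presumably the site compared against
            "max_pages": 5,                           # optional; default undocumented
        },
    },
}

payload = json.dumps(request)
```

A client library would normally build this envelope for you; the sketch only makes the wire shape concrete.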
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions scanning and returning a comparison but fails to describe key traits such as whether this is a read-only operation, potential rate limits, authentication needs, or what the comparison output entails (e.g., format, scope). This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is appropriately sized and front-loaded, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a scanning/comparison tool with 3 parameters, no annotations, and no output schema), the description is incomplete. It doesn't explain what the scan entails, what is compared, or what the return value looks like, leaving the agent with insufficient context to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'competitor' and 'side-by-side comparison', which loosely relate to 'url' and 'competitor_url', but it doesn't explain the meaning or usage of 'max_pages' or provide any details beyond what the schema's property names imply. This adds minimal value over the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'scan[s] a competitor and return[s] side-by-side comparison', which provides a general purpose (scanning and comparing) but lacks specificity about what is being scanned or compared. It doesn't distinguish this tool from sibling tools like 'scan_website' or 'refresh_scan', making the purpose somewhat vague rather than clearly differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'scan_website' or 'refresh_scan'. The description implies usage for competitor comparison but doesn't specify contexts, prerequisites, or exclusions, leaving the agent with no clear decision-making criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_score_breakdown (C)
Get detailed line-item breakdown for a score dimension
| Name | Required | Description | Default |
|---|---|---|---|
| checks | Yes | ||
| dimension | Yes |
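Since the schema's `dimension` enum (geo, multimodal, agent_ready, per the review below) is the only documented constraint, a client can at least guard that value before calling. A sketch under that assumption; the shape of `checks` is undocumented, so a list is assumed here:

```python
# Client-side guard for get_score_breakdown. The dimension enum values
# (geo, multimodal, agent_ready) come from the tool's input schema; the
# expected shape of 'checks' is undocumented, so a list is assumed.
VALID_DIMENSIONS = {"geo", "multimodal", "agent_ready"}

def build_breakdown_args(dimension: str, checks: list) -> dict:
    """Build a tools/call arguments dict, rejecting unknown dimensions early."""
    if dimension not in VALID_DIMENSIONS:
        raise ValueError(f"dimension must be one of {sorted(VALID_DIMENSIONS)}")
    return {"dimension": dimension, "checks": checks}
```

Failing fast on the enum saves a round trip when an agent guesses a dimension name like "seo" that the server would reject anyway.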
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does ('Get detailed line-item breakdown') but lacks critical behavioral details such as whether it's a read-only operation, potential side effects, error handling, or output format. This is inadequate for a tool with parameters and no structured safety hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It's appropriately sized for the tool's complexity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 required parameters, 0% schema description coverage, no annotations, and no output schema, the description is insufficient. It doesn't explain what a 'score dimension' entails, what 'checks' represents, or what the breakdown output looks like, leaving significant gaps for an AI agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'score dimension' and 'line-item breakdown,' which loosely relate to the 'dimension' parameter (with enum values: geo, multimodal, agent_ready) and 'checks' parameter, but doesn't explain their meanings, formats, or how they interact. This adds minimal value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and target ('detailed line-item breakdown for a score dimension'), making the purpose understandable. However, it doesn't differentiate this tool from its siblings (compare_scan, refresh_scan, scan_website, submit_lead), which appear to be related to scanning/assessment but have different functions, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like compare_scan or refresh_scan. The description implies usage for obtaining breakdowns, but it doesn't specify prerequisites, exclusions, or contextual triggers, leaving the agent with minimal direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
refresh_scan (C)
Re-scan with updated parameters
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | ||
| max_pages | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. 'Re-scan' implies a potentially resource-intensive operation, but the description doesn't mention performance characteristics, rate limits, authentication needs, or what happens to previous scan results. It doesn't specify whether this is a read-only or mutating operation, or what the expected output format might be.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at just 4 words, which is appropriately brief for a simple tool. However, this brevity comes at the cost of clarity: the description is under-specified rather than efficiently informative. The structure is front-loaded but lacks necessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters, 0% schema coverage, no annotations, and no output schema, the description is inadequate. It doesn't explain what the tool returns, what 're-scan' means operationally, or how it differs from initial scanning. The agent would struggle to use this tool correctly without additional context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for both parameters, the description must compensate but fails to do so. 'updated parameters' vaguely references the parameters but doesn't explain what 'url' represents (the target to re-scan) or what 'max_pages' controls. The description adds minimal value beyond what's already apparent from parameter names alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Re-scan with updated parameters' states a vague purpose: it indicates performing a scan operation again with different parameters, but doesn't specify what resource is being scanned or what 'updated parameters' means. It distinguishes somewhat from 'scan_website' by implying this is a re-scan rather than an initial scan, but lacks specificity about what exactly gets re-scanned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'scan_website' or 'compare_scan'. It doesn't indicate prerequisites (e.g., whether an initial scan must exist), appropriate contexts, or exclusions. The agent receives no help in choosing between sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_website (A, Read-only)
Scan any website for AI visibility and marketing health. Returns scores for GEO (Generative Engine Optimization), Multimodal readiness, Agent-Ready infrastructure, and 6-dimension Marketing Health. Identifies critical findings with prioritized fix recommendations and revenue impact estimates.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The website URL to scan (e.g. https://example.com) | |
| industry | No | Industry vertical for context | general |
| max_pages | No | Maximum subpages to scan (default: 5) |
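This is the one tool whose defaults are documented, so a client can mirror them when building arguments. A sketch under the assumption that the table above is authoritative (industry defaults to "general", max_pages to 5, only url required):

```python
# Sketch of applying scan_website's documented defaults client-side.
# Per the parameter table: industry defaults to "general", max_pages
# defaults to 5, and only url is required.
SCAN_DEFAULTS = {"industry": "general", "max_pages": 5}

def build_scan_args(url: str, **overrides) -> dict:
    """Merge caller overrides over the documented defaults."""
    return {"url": url, **SCAN_DEFAULTS, **overrides}
```

In practice the server applies these defaults itself; materializing them client-side just makes the effective call explicit in logs.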
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, and destructiveHint=false, covering safety and scope. The description adds valuable behavioral context by specifying the comprehensive nature of the scan (multiple dimensions, critical findings, recommendations, impact estimates) and the scanning scope (website-wide), which goes beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences that pack substantial information: the first states the action and high-level returns, the second details the specific score dimensions and outputs. Every word contributes to understanding the tool's value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (comprehensive scanning with multiple output dimensions), rich annotations, and 100% schema coverage, the description provides strong context about what the tool does and returns. However, without an output schema, it could benefit from more detail about the exact structure of the returned scores and recommendations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all three parameters (url, industry, max_pages). The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Scan any website') and the comprehensive scope of what it returns (AI visibility, marketing health scores across multiple dimensions, critical findings with recommendations and impact estimates). It distinguishes itself from siblings by focusing on a full website scan rather than comparison, breakdown, refresh, or lead submission operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying what the tool scans and returns, but doesn't explicitly state when to use it versus alternatives like compare_scan or get_score_breakdown. It provides clear purpose but lacks explicit guidance on tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_lead (C)
Submit a lead for service booking
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | ||
| tier | Yes | ||
| Yes | |||
| company | No | ||
| scan_url | Yes | ||
| findings_count | No |
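Because this is a mutation with four required parameters and no schema descriptions, a client may want to verify completeness before submitting. A sketch validating only the required parameters named in the listing; the table above shows a fourth required parameter whose name did not survive extraction, and the valid 'tier' values are undocumented:

```python
# Minimal required-field check for submit_lead. Only parameters named in
# the listing are validated; one required parameter's name is missing
# from the listing above, so it is necessarily omitted here.
REQUIRED_LEAD_FIELDS = {"name", "tier", "scan_url"}

def missing_lead_fields(args: dict) -> list:
    """Return the named required fields absent from args, sorted."""
    return sorted(REQUIRED_LEAD_FIELDS - args.keys())
```

A pre-flight check like this matters more for a write operation than for the read-only tools, since a half-formed lead submission may have side effects the description does not disclose.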
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Submit a lead' implies a write operation, but the description doesn't disclose critical traits like authentication requirements, rate limits, side effects (e.g., email notifications), or what happens after submission (e.g., confirmation). For a mutation tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste—'Submit a lead for service booking' is front-loaded and directly states the tool's function. Every word earns its place, making it highly concise and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a mutation tool with 6 parameters, no annotations, and no output schema), the description is incomplete. It doesn't explain the return values, error conditions, or how parameters interact (e.g., how the 'tier' enum affects service booking). While concise, it lacks the depth needed for an AI agent to confidently invoke the tool without guessing at behavioral or parametric details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'lead' and 'service booking', which loosely relate to parameters like 'tier' or 'scan_url', but it adds no specific meaning for each parameter (e.g., that 'tier' options map to service levels, or that 'scan_url' references the analyzed site). With 6 parameters (4 required) and no schema descriptions, the description provides minimal semantic value beyond the tool's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('submit a lead') and the purpose ('for service booking'), which is specific and actionable. It distinguishes this tool from sibling tools like 'scan_website' or 'compare_scan' by focusing on lead submission rather than scanning or analysis. However, it doesn't explicitly mention what resource or system the lead is submitted to, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'scan_website' or 'refresh_scan'. It doesn't mention prerequisites (e.g., needing a scan first), exclusions, or specific contexts for lead submission. The phrase 'for service booking' hints at a use case but lacks explicit when/when-not instructions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
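The claim file can be generated programmatically; a minimal sketch using the structure above (serve the result at /.well-known/glama.json on your server's domain, with your real account email substituted in):

```python
import json

# Generate the Glama claim file from the documented structure. The email
# placeholder must be replaced with the address tied to your Glama account.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

contents = json.dumps(claim, indent=2)
```

Writing `contents` to `.well-known/glama.json` in your web root is then enough for Glama's automatic verification to find it.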
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.