Glama

Server Details

AI search visibility audit — AEO, GEO, Agent Readiness scores with mention readiness.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Convrgent/aeo-scanner-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
audit_site
Read-only

Full AI visibility audit across 58+ checks in 12 categories (4 AEO + 4 GEO + 4 Agent Readiness). Returns detailed per-check scores with specific issues and recommendations, AI Identity Card with mention readiness and detected competitors, and business profile. GEO checks include 3 research-backed citation signals: factual density, answer frontloading, and source citations. Does NOT generate fix code — use fix_site for that, or compare_sites to benchmark against a competitor. Requires API key ($1.00 per call). If you get payment_required, tell the user to set AEO_API_KEY.

Parameters (JSON Schema)
- url (required): Full URL to audit
- pages (optional): Number of pages to audit (1-10)
- categories (optional): Filter to specific categories: structured_data, meta_technical, ai_accessibility, content_quality, brand_narrative, citation_readiness, authority_signals, entity_definition, machine_identity, api_discoverability, structured_actions, programmatic_access
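Since the server speaks MCP over Streamable HTTP, an invocation of audit_site ultimately travels as a JSON-RPC 2.0 `tools/call` request. The sketch below builds such a request body with the parameters above; the argument values are illustrative, and the exact transport framing is left to whichever MCP client library you use.

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(payload)

# audit_site takes a required url plus optional pages (1-10) and categories.
request_body = build_tool_call("audit_site", {
    "url": "https://example.com",
    "pages": 3,
    "categories": ["structured_data", "citation_readiness"],
})
```

A real client would also attach the AEO_API_KEY credential, since audit_site is a paid call.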
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true and openWorldHint=true, the description adds critical behavioral context: the cost model ($1.00 per call), authentication requirements (API key), and detailed output structure (business profile interpretation). It does not mention rate limits or idempotency, but covers the essential operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: (1) purpose and scope, (2) return values, (3) sibling differentiation, (4) cost/auth/error handling. Information is front-loaded with the core action, and every sentence provides distinct, non-redundant value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately explains return values ('detailed per-check scores,' 'AI Identity Card,' 'business profile'). It covers prerequisites (API key), cost, error handling, and sibling relationships, making it complete for a paid third-party audit tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds domain-specific semantic context by grouping the 12 categories into '4 AEO + 4 GEO + 4 Agent Readiness,' helping the agent understand the audit domains. However, it does not elaborate on the 'url' or 'pages' parameters beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool performs a 'Full AI visibility audit across 58+ checks in 12 categories' and specifies the output format (per-check scores, AI Identity Card, business profile). It clearly distinguishes itself from the sibling 'fix_site' by stating 'Does NOT generate fix code — use fix_site for that.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-not-to-use guidance ('Does NOT generate fix code — use fix_site for that') and names the correct alternative. Also includes specific error handling instructions ('If you get payment_required, tell the user to set AEO_API_KEY') and prerequisites ('Requires API key').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_sites
Read-only

Competitive gap analysis — scans two sites concurrently, shows side-by-side scores, category-by-category winners, competitive gaps (checks where the competitor scored 20+ higher), and generated overtake fix code with projected scores after closing gaps. Use this when the user wants to benchmark against a competitor or when scan_site detects competitors in the AI Identity Card. Requires API key ($3.00 per call). If you get payment_required, tell the user to set AEO_API_KEY.

Parameters (JSON Schema)
- url (required): Your site URL
- pages (optional): Number of pages to scan per site (1-5)
- competitorUrl (required): Competitor site URL to benchmark against
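The description defines a competitive gap as a check where the competitor scored 20+ points higher. A minimal sketch of that rule, using made-up check names and scores (the server's actual response schema is not shown on this page):

```python
# Hypothetical per-check scores (0-100); field names are illustrative only.
yours = {"schema_markup": 40, "robots_txt": 90, "llms_txt": 10}
theirs = {"schema_markup": 75, "robots_txt": 85, "llms_txt": 65}

def competitive_gaps(mine: dict, competitor: dict, threshold: int = 20) -> dict:
    """Checks where the competitor scores at least `threshold` points higher."""
    return {
        check: competitor[check] - mine[check]
        for check in mine
        if check in competitor and competitor[check] - mine[check] >= threshold
    }

gaps = competitive_gaps(yours, theirs)  # {'schema_markup': 35, 'llms_txt': 55}
```

robots_txt is excluded because your score is already higher; only checks crossing the 20-point threshold count as gaps to close.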
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=true. The description adds critical cost information ('$3.00 per call'), authentication requirements ('Requires API key'), and error handling guidance ('If you get payment_required, tell the user to set AEO_API_KEY'). It also details the output format (side-by-side scores, gap analysis, generated fix code), compensating for the lack of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with purpose front-loaded, followed by usage conditions, then operational constraints. While information-dense, each clause earns its place. Minor deduction for the slightly verbose parenthetical '(checks where the competitor scored 20+ higher)' which could be tighter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (competitive analysis, code generation, paid API) and lack of output schema, the description adequately covers what the tool returns (scores, winners, gaps, fix code) and operational requirements (cost, auth). Could be improved with rate limit or latency expectations, but sufficient for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is appropriately 3. The description implicitly references the URL parameters ('scans two sites') but does not add semantic details, validation rules, or format guidance beyond what the schema already provides for the 'pages' parameter or URL formats.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly defines the tool as performing 'Competitive gap analysis' with specific actions (scans two sites concurrently, shows side-by-side scores, category winners, competitive gaps). It clearly distinguishes from siblings by implying scan_site detects competitors while this tool benchmarks against them, and contrasts with fix_site by generating 'fix code' as output rather than applying fixes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use conditions: 'when the user wants to benchmark against a competitor' and specifically references sibling tool scan_site ('when scan_site detects competitors in the AI Identity Card'). This creates clear trigger conditions that help the agent route correctly between scanning and comparing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fix_site

Generate complete fix code for all AI visibility issues across AEO, GEO, and Agent Readiness. Returns working code you can apply directly — schema generation, robots.txt, sitemap, llms.txt, meta tags, structured data, citation signals, entity markup. Also returns two-tier score projections: quick wins (critical + high fixes only) and full implementation ceiling (all fixes). Content recommendations include research citations. Run scan_site first to see which issues exist. Requires API key ($5.00 per call). If you get payment_required, tell the user to set AEO_API_KEY.

Parameters (JSON Schema)
- url (required): Full URL to generate fixes for
- pages (optional): Number of pages to analyze (1-10)
- format (optional, default: generic): Output format: generic or claude_code (optimized for Claude Code)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the openWorldHint annotation, the description richly discloses behavioral traits: exact cost per call, comprehensive output details (listing 7+ specific code types returned and score projection tiers), and explicit instruction that it returns code to apply rather than modifying the site directly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense and well-structured (purpose → outputs → workflow → cost → errors). While slightly lengthy, every sentence provides distinct value; the enumerated list of code types is necessary for scope clarity, though it could be tighter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (external API dependency, monetary cost, multi-format output), the description is remarkably complete. It compensates for the lack of output schema by detailing return values (specific code types, score projections) and covers prerequisites, costs, and error states.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters (url, pages, format). The description focuses on outputs and workflow rather than adding parameter-specific semantics, which is acceptable given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates 'complete fix code for all AI visibility issues across AEO, GEO, and Agent Readiness,' providing specific verbs, resources, and scope. It distinguishes itself from sibling 'scan_site' by explicitly noting to 'Run scan_site first,' establishing the workflow relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent guidance includes explicit workflow instruction ('Run scan_site first to see which issues exist'), clear cost disclosure ('Requires API key ($5.00 per call)'), and error handling protocol ('If you get payment_required, tell the user to set AEO_API_KEY').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_site
Read-only

Quick AI visibility scan. Returns three scores: AEO Score (0-100, AI search engine findability), GEO Score (0-100, AI citation readiness), and Agent Readiness Score (0-100, AI agent interaction capability). Also returns AI Identity Card with mention readiness (0-100, predicts how likely AI will mention the brand), detected competitors, business profile (commerce/saas/media/general), and top 5 issues. 58+ checks across 12 categories. Free — no API key needed. Does NOT return per-check details or fix code — use audit_site for full breakdown, fix_site for generated fixes, compare_sites to benchmark against a competitor.

Parameters (JSON Schema)
- url (required): Full URL to scan (e.g. https://example.com)
- pages (optional): Number of pages to scan (1-5)
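Taken together, the four descriptions define a routing policy: scan_site is the free entry point, and the paid tools are follow-ups for fixes, benchmarking, or detail. A minimal sketch of that dispatch logic as an agent might encode it; the predicate names are illustrative, and the prices are quoted from the descriptions above:

```python
def pick_tool(wants_fixes: bool, has_competitor: bool, wants_detail: bool) -> str:
    """Route per the tool descriptions: scan_site is the free entry point;
    the other three tools are paid follow-ups."""
    if wants_fixes:
        return "fix_site"        # $5.00: generated fix code + score projections
    if has_competitor:
        return "compare_sites"   # $3.00: side-by-side competitor benchmark
    if wants_detail:
        return "audit_site"      # $1.00: full per-check breakdown
    return "scan_site"           # free: summary scores and top 5 issues only
```

The precedence here (fixes over benchmarking over detail) is a design choice for the sketch, not something the descriptions mandate.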
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=true. The description adds valuable behavioral context not in annotations: the no-cost/auth requirement ('Free — no API key needed'), semantic definitions of the scoring metrics (AEO, GEO, Agent Readiness), and scope limitations (returns summary scores, not per-check details). It does not mention rate limits or error behaviors, preventing a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with zero waste: opening purpose statement, detailed return value specification, and constraint/alternative guidance in the final sentence. Every clause provides distinct information (scope, return structure, cost, exclusions) without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description comprehensively details the return structure (three specific scores with ranges, identity card, business profile classification, top 5 issues). Given the simple 2-parameter input and read-only nature, the description provides sufficient context for correct agent invocation and result handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for both parameters (url and pages), the schema fully documents the input requirements. The description does not add parameter-specific semantics, but given the high schema coverage, the baseline score of 3 is appropriate as the schema carries the burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Quick AI visibility scan' and clearly enumerates the returned artifacts (three scores, AI Identity Card, business profile, top 5 issues). It explicitly distinguishes this tool from siblings by contrasting it with audit_site (full breakdown) and fix_site (generated fixes), ensuring the agent understands the scope boundary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-not guidance: 'Does NOT return per-check details or fix code' and names the exact alternatives to use instead ('use audit_site for full breakdown, fix_site for generated fixes'). Also adds prerequisite context 'Free — no API key needed' which informs invocation requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

