Server Details

AEO audit: score any website 0-100 for AI visibility. Checks schema, meta, content, AI crawlers.

Status: Healthy
Transport: Streamable HTTP
Repository: piiiico/aeo-mcp-server
GitHub Stars: 0

Tool Descriptions: A

Average 4.1/5 across 3 of 3 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: analyze_aeo provides a comprehensive audit with detailed breakdowns, check_ai_readiness focuses on technical configuration for AI crawlers, and get_aeo_score offers a quick score-only check. There is no overlap in functionality, and the descriptions clearly differentiate when to use each tool.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (analyze_aeo, check_ai_readiness, get_aeo_score), using snake_case throughout. The verbs (analyze, check, get) are appropriately descriptive and aligned with each tool's specific function, creating a predictable and readable naming scheme.
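The verb_noun snake_case convention described above can be checked mechanically. A minimal sketch (the verb whitelist is an assumption for illustration, not something the server publishes):

```python
import re

# snake_case verb_noun: an allowed lowercase verb, an underscore, then one
# or more lowercase/underscore segments (e.g. analyze_aeo, get_aeo_score).
VERB_NOUN = re.compile(r"^(analyze|check|get)_[a-z][a-z_]*$")

tools = ["analyze_aeo", "check_ai_readiness", "get_aeo_score"]
nonconforming = [name for name in tools if not VERB_NOUN.match(name)]
print("Nonconforming tool names:", nonconforming)
```

All three names pass, which is what the 5/5 score reflects.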

Tool Count: 5/5

With 3 tools, this server is well-scoped for its AEO audit domain. Each tool serves a distinct and necessary role: comprehensive analysis, technical readiness check, and quick score retrieval. This count avoids bloat while covering the core workflows an agent would need for AEO assessment.

Completeness: 4/5

The tool set covers the essential AEO audit lifecycle: quick scoring, detailed analysis, and technical configuration checks. A minor gap exists in lacking tools for implementing recommendations (e.g., fix_issues or update_schema), but agents can work around this by using the audit results to guide manual actions.

Available Tools (3)
analyze_aeo (Grade: A)

Run a full AEO (Answer Engine Optimization) audit on a website. Returns a score 0-100, grade (A-F), breakdown by category (schema, meta, content, technical, AI signals), list of issues found, and prioritized recommendations to improve AI visibility. Use this when you need a comprehensive analysis of why a business isn't appearing in AI assistant answers.

Parameters (JSON Schema)

Name | Required | Description | Default
url | Yes | The URL of the website to audit (e.g. 'https://example.com' or 'example.com') |
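Since this is an MCP server over Streamable HTTP, an agent invokes the tool with a JSON-RPC tools/call request. A minimal sketch of the payload an MCP client would send (the request id and target URL are illustrative):

```python
import json

# JSON-RPC 2.0 request for the MCP "tools/call" method, invoking
# analyze_aeo with its single required "url" argument.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_aeo",
        "arguments": {"url": "https://example.com"},
    },
}
print(json.dumps(request, indent=2))
```

The server's response carries the score, grade, category breakdown, issues, and recommendations described above.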
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With zero annotations provided, the description must disclose all behavioral traits. It details the return format well (score 0-100, grade A-F, categories, issues, recommendations) but omits confirmation of operational safety (its read-only nature), execution time, rate limits, and crawl scope. It just meets the minimum viable threshold for a data-retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two information-dense sentences with zero waste. First sentence front-loads purpose and return value structure; second provides usage context. Every clause earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter audit tool lacking output schema and annotations, the description adequately compensates by detailing return values and usage context. Missing explicit differentiation from named siblings and operational constraints (auth, limits) prevents a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage for its single parameter, documenting the URL format with examples. The description itself adds no parameter-specific details, but a baseline score of 3 is appropriate when the schema documentation is complete and little compensation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Run a full AEO (Answer Engine Optimization) audit on a website' provides exact verb, resource, and scope. Implicitly distinguishes from sibling 'get_aeo_score' (simple score retrieval) by emphasizing 'full' audit with detailed breakdowns, and from 'check_ai_readiness' by focusing on optimization rather than readiness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit usage trigger: 'Use this when you need a comprehensive analysis of why a business isn't appearing in AI assistant answers.' This clearly signals when to select this deep-audit tool over lighter sibling alternatives. Lacks explicit 'when not to use' or named alternatives that would warrant a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_ai_readiness (Grade: A)

Check whether a website is properly configured for AI crawler access. Checks robots.txt for AI bot blocks, presence of llms.txt, schema markup, and other signals that affect whether ChatGPT, Claude, Perplexity and other AI assistants can read and cite the site. Returns a readiness summary with specific blockers.

Parameters (JSON Schema)

Name | Required | Description | Default
url | Yes | The URL of the website to check |
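The server's implementation is not shown on this page, but the robots.txt portion of such a check can be sketched with the standard library. The bot list below is an assumption based on the assistants the description names, and the sample robots.txt is illustrative (in practice it would be fetched from the target site):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt that blocks one AI crawler outright.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

# Common AI crawler user agents (assumed list for illustration).
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Any bot that cannot fetch the site root is reported as a blocker.
blocked = [bot for bot in AI_BOTS if not rp.can_fetch(bot, "https://example.com/")]
print("Blocked AI crawlers:", blocked)
```

A readiness summary like the one this tool returns would surface such blocked crawlers as specific blockers, alongside the llms.txt and schema-markup checks.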
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden and succeeds in explaining what gets checked (robots.txt blocks, llms.txt, schema markup) and what gets returned ('readiness summary with specific blockers'). However, it omits operational details like whether this is read-only, if it makes live HTTP requests, or potential rate limits.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. First sentence establishes high-level purpose. Second sentence enumerates specific technical checks (robots.txt, llms.txt, schema markup) and affected AI systems. Third sentence describes return value. Information is front-loaded and every clause earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent completeness given the simple single-parameter input and lack of output schema. The description compensates by explicitly detailing the return value ('readiness summary with specific blockers') and enumerating checked signals. Only minor gap is lack of operational metadata (read-only hint, network behavior) which would normally appear in annotations.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'url' parameter, which is well-described in the schema as 'The URL of the website to check'. The description references 'website' in the first sentence, aligning with the parameter, but does not add format constraints, examples, or protocol requirements beyond what the schema already provides. Baseline score appropriate for high schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent clarity with specific verb+resource ('Check whether a website is properly configured for AI crawler access'). Distinguishes from siblings analyze_aeo/get_aeo_score by focusing specifically on crawler accessibility (robots.txt, llms.txt) rather than general content optimization or scoring. Lists specific AI assistants (ChatGPT, Claude, Perplexity) to scope the use case precisely.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage through detailed technical scope (checking robots.txt AI blocks, llms.txt presence), but lacks explicit guidance on when to choose this over analyze_aeo or get_aeo_score. No 'when-not-to-use' or prerequisite conditions mentioned. Sibling differentiation relies on inference from the technical details provided.

get_aeo_score (Grade: A)

Get a quick AEO score for a website without the full breakdown. Returns the numeric score (0-100) and letter grade (A-F). Use this for a quick visibility check before deciding whether a full audit is needed.

Parameters (JSON Schema)

Name | Required | Description | Default
url | Yes | The URL of the website to check |
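The score-to-grade mapping is not documented on this page; a minimal sketch under assumed cutoffs shows the shape of the tool's output:

```python
def grade(score: int) -> str:
    """Map a 0-100 AEO score to a letter grade.

    The thresholds below are an assumption for illustration; the server's
    actual cutoffs are not published here.
    """
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

print(grade(85))
```

An agent following the description's guidance would call get_aeo_score first and escalate to analyze_aeo only when the grade warrants a full audit.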
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full disclosure burden. It compensates by disclosing the return structure (numeric score 0-100, letter grade A-F), since no output schema exists, and clarifies the scope limitation ('without the full breakdown'). It does not mention rate limits or auth requirements.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences. First covers purpose and return values; second covers usage guidance. Zero waste, front-loaded with essential information.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple single-parameter tool with no annotations and no output schema, the description is complete. It successfully compensates for missing output schema by detailing return format and differentiates from siblings, though could mention rate limiting or caching behavior.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (single 'url' parameter fully described in schema). Description mentions 'for a website' which aligns with the parameter but adds no syntax/format details beyond the schema, warranting the baseline score of 3.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent clarity with specific verb 'Get', resource 'AEO score', and clear scope 'quick...without the full breakdown'. The phrase 'quick visibility check' effectively distinguishes this from sibling 'analyze_aeo' (implied full audit).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('quick visibility check') and strongly implies when not to use ('before deciding whether a full audit is needed'), guiding users toward the appropriate depth of analysis without naming siblings explicitly.
