Glama
Ownership verified

Server Details

Scan any website for SEO, performance, accessibility, and AI search issues. Returns structured issues with fix prompts you can paste into Claude or Cursor to fix immediately. 40+ checks including Core Web Vitals, Open Graph, structured data, and AI search visibility.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 3.5/5 across 2 of 2 tools scored.

Server Coherence: A
Disambiguation: 5/5

The tools have perfectly distinct purposes: `scan_website` initiates a new scan while `get_scan` retrieves existing results by ID. No functional overlap exists.

Naming Consistency: 5/5

Both tools follow consistent snake_case formatting and clear verb_noun patterns (`scan_website`, `get_scan`). The naming convention is predictable and readable.

Tool Count: 3/5

With only 2 tools, the set is borderline thin for an SEO linting service. While it covers the basic scan-and-retrieve workflow, it lacks supporting operations (list, delete, configure) that would round out the surface.

Completeness: 3/5

The surface covers the minimal lifecycle (create scan, get results) but has notable gaps. There is no way to list previous scans to discover IDs, cancel running scans, or manage scan history, which limits agent utility for ongoing monitoring.
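The minimal lifecycle described above (create a scan, then fetch results) corresponds to two MCP `tools/call` requests. A sketch of the JSON-RPC payloads, assuming the standard MCP `tools/call` shape; the scan ID shown is a placeholder, since a real one would come from the `scan_website` response:

```python
import json

def tool_call(request_id, name, arguments):
    """Build a JSON-RPC 2.0 payload for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: initiate a scan.
start = tool_call(1, "scan_website", {"url": "https://example.com"})

# Step 2: retrieve results by ID. The UUID is a placeholder; a real ID
# would be taken from the scan_website response.
fetch = tool_call(2, "get_scan",
                  {"scanId": "00000000-0000-0000-0000-000000000000"})

print(json.dumps(start, indent=2))
```

Because the surface exposes no list operation, the client must carry the scan ID from the first response to the second request itself.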

Available Tools

3 tools
get_scan: A

Get the results of a previous SEOLint scan by its ID.

Parameters (JSON Schema)
- scanId (required): The scan ID (UUID)
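Since the schema documents scanId only as "The scan ID (UUID)", a client can cheaply pre-validate the value before issuing the call. A minimal sketch using Python's standard uuid module; the strict canonical-form check is an assumption, and the server may accept other encodings:

```python
import uuid

def is_valid_scan_id(scan_id: str) -> bool:
    """Return True if scan_id parses as a canonical dashed UUID string."""
    try:
        # Round-trip: uuid.UUID also accepts undashed hex, so compare the
        # canonical rendering back against the input.
        return str(uuid.UUID(scan_id)) == scan_id.lower()
    except ValueError:
        return False

print(is_valid_scan_id("123e4567-e89b-12d3-a456-426614174000"))  # True
print(is_valid_scan_id("not-a-uuid"))                            # False
```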
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

States the basic retrieval action but, with no annotations present, omits behavioral details such as error handling for invalid IDs, caching, and result freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is appropriately sized, front-loaded with action, and contains no redundancy for this simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple getter with one parameter and no output schema, though mentioning error cases (e.g., expired scans) would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The phrase 'by its ID' creates a semantic link to the scanId parameter, but since the schema already fully documents the UUID format, the description meets the baseline without adding significant context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') and resource ('results of a previous SEOLint scan'), with 'previous' implicitly distinguishing it from sibling scan_website, though explicit contrast is missing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies retrieval workflow via 'previous' but lacks explicit when-to-use guidance or contrast with scan_website alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_site_intelligence: A

Get the full intelligence picture for a domain: site goal, ICP, sitemap structure and gaps, cross-page patterns (template issues affecting multiple pages), and scan coverage by page type. Call this at the start of any SEO session.

Parameters (JSON Schema)
- domain (required): The domain, e.g. example.com
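Note that the schema example is a bare domain (example.com), not a full URL as scan_website expects. A small normalization helper a client might apply before calling; stripping a leading "www." is an assumption here, not documented tool behavior:

```python
from urllib.parse import urlparse

def to_domain(value: str) -> str:
    """Normalize input like 'https://www.example.com/page' to a bare
    hostname, matching the schema's 'example.com' example."""
    host = urlparse(value).netloc or value  # bare domains have no scheme
    host = host.split(":")[0]               # drop any port
    return host[4:] if host.startswith("www.") else host

print(to_domain("https://www.example.com/pricing"))  # example.com
print(to_domain("example.com"))                      # example.com
```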
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It compensates by detailing the specific intelligence components returned (site goal, ICP, sitemap gaps, template issues, coverage by page type), effectively describing the output. However, it lacks operational details like safety profile, caching behavior, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero waste. The first front-loads the action ('Get the full intelligence picture') followed by a colon-delimited list of specific outputs. The second sentence provides clear usage guidance. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description appropriately compensates by enumerating the five specific intelligence components returned. However, it could further improve by indicating the structure/format of the returned data (e.g., JSON structure, nested objects) since no output schema exists to document this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the 'domain' parameter clearly documented as 'The domain, e.g. example.com'. Per guidelines, with high schema coverage the baseline is 3. The description references 'for a domain' which aligns with the schema but does not add additional semantic context beyond the schema definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Get') and resources ('intelligence picture') and explicitly lists the specific data returned (site goal, ICP, sitemap structure, cross-page patterns, scan coverage). It clearly distinguishes from sibling tools 'get_scan' and 'scan_website' by positioning this as an intelligence/overview tool rather than a scanning operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal guidance ('Call this at the start of any SEO session') indicating when to invoke the tool. However, it does not explicitly mention when NOT to use it or directly contrast with sibling alternatives like 'get_scan' for specific scan retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_website: B

Scan a website for SEO, performance, accessibility, and AI search issues. Returns structured issues with LLM-ready fix instructions.

Parameters (JSON Schema)
- url (required): The full URL to scan, e.g. https://example.com
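The schema asks for "the full URL", scheme included, in contrast to get_site_intelligence's bare domain. A quick client-side check sketched with Python's urllib.parse; restricting schemes to http/https is an assumption based on the schema's example:

```python
from urllib.parse import urlparse

def is_full_url(value: str) -> bool:
    """Return True if value is a full URL (scheme plus host), as the
    scan_website schema example 'https://example.com' suggests."""
    parts = urlparse(value)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(is_full_url("https://example.com"))  # True
print(is_full_url("example.com"))          # False (missing scheme)
```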
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses output format ('structured issues with LLM-ready fix instructions') compensating for missing output schema, but omits execution time, async behavior, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and free of redundancy; every clause delivers essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers functionality and return values for a single-parameter tool despite lacking formal output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description adds no parameter details, but schema has 100% coverage with clear descriptions, meeting baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific action ('Scan') and target resource with four distinct issue categories, though it lacks explicit differentiation from the sibling 'get_scan'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus the sibling 'get_scan' tool, or on the workflow sequence (initiate vs. retrieve).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
