Server Details

Scan any website for SEO, performance, accessibility, and AI search issues. Returns structured issues with fix prompts you can paste into Claude or Cursor to fix immediately. 40+ checks including Core Web Vitals, Open Graph, structured data, and AI search visibility.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.8/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: get_scan retrieves past results, get_site_intelligence provides domain analysis, next_issue prioritizes actionable tasks, and scan_website initiates new scans. The descriptions clearly differentiate their functions, eliminating any confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case: get_scan, get_site_intelligence, next_issue, and scan_website. This uniformity makes the toolset predictable and easy to understand at a glance.
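The uniformity claim is easy to verify mechanically. The check below is an illustrative sketch, not part of the server; the regex and helper are assumptions introduced here:

```python
import re

# The four tool names listed on this server.
TOOL_NAMES = ["get_scan", "get_site_intelligence", "next_issue", "scan_website"]

# snake_case with at least two lowercase words joined by underscores.
SNAKE_CASE = re.compile(r"^[a-z]+(?:_[a-z]+)+$")

def is_snake_case(name: str) -> bool:
    """Return True when the name is multi-word lowercase snake_case."""
    return SNAKE_CASE.fullmatch(name) is not None

# Every name on this server passes the pattern.
assert all(is_snake_case(name) for name in TOOL_NAMES)
```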

Tool Count: 5/5

With 4 tools, this server is well-scoped for SEO analysis, covering key workflows like scanning, retrieving results, intelligence gathering, and task prioritization. Each tool earns its place without feeling excessive or insufficient for the domain.

Completeness: 4/5

The toolset provides strong coverage for core SEO workflows, including scanning, result retrieval, domain intelligence, and actionable insights. A minor gap exists in update or delete operations for scans, but agents can work around this given the server's focus on analysis and reporting.

Available Tools

4 tools
get_scan: A

Get the results of a previous SEOLint scan by its ID.

Parameters

Name    Required  Description         Default
scanId  Yes       The scan ID (UUID)
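MCP tools are invoked over JSON-RPC with a tools/call method, so a client-side call to get_scan might look like the sketch below. The build_get_scan_request helper is hypothetical; validating the UUID locally simply mirrors the format the parameter schema documents:

```python
import json
import uuid

def build_get_scan_request(scan_id: str, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 tools/call request for get_scan.

    Raises ValueError when scan_id is not a valid UUID, matching the
    format the parameter schema documents.
    """
    uuid.UUID(scan_id)  # validate before sending
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_scan",
            "arguments": {"scanId": scan_id},
        },
    })
```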
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

States the basic retrieval action but omits behavioral details such as error handling for invalid IDs, caching, and result freshness; since no annotations exist, the description carries the full behavioral burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is appropriately sized, front-loaded with action, and contains no redundancy for this simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple getter with one parameter and no output schema, though mentioning error cases (e.g., expired scans) would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Mentions 'by its ID', creating a semantic link to the scanId parameter, but with the schema already providing 100% coverage and explaining the UUID format, this meets the baseline without significant additional context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') and resource ('results of a previous SEOLint scan'), with 'previous' implicitly distinguishing it from sibling scan_website, though explicit contrast is missing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies retrieval workflow via 'previous' but lacks explicit when-to-use guidance or contrast with scan_website alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_site_intelligence: A

Get the full intelligence picture for a domain: site goal, ICP, sitemap structure and gaps, cross-page patterns (template issues affecting multiple pages), and scan coverage by page type. Call this at the start of any SEO session.

Parameters

Name    Required  Description                   Default
domain  Yes       The domain, e.g. example.com
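The schema's example ('example.com') implies a bare domain rather than a full URL. A hypothetical client-side normalizer under that assumption (not part of the server):

```python
from urllib.parse import urlparse

def normalize_domain(value: str) -> str:
    """Reduce a URL or hostname to the bare form the schema shows
    (example.com): drop the scheme, path, port, and a leading 'www.'."""
    host = urlparse(value).netloc or value.split("/", 1)[0]
    host = host.split(":", 1)[0].lower()
    return host[4:] if host.startswith("www.") else host
```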
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It compensates by detailing the specific intelligence components returned (site goal, ICP, sitemap gaps, template issues, coverage by page type), effectively describing the output. However, it lacks operational details like safety profile, caching behavior, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero waste. The first front-loads the action ('Get the full intelligence picture') followed by a colon-delimited list of specific outputs. The second sentence provides clear usage guidance. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description appropriately compensates by enumerating the five specific intelligence components returned. However, it could further improve by indicating the structure/format of the returned data (e.g., JSON structure, nested objects) since no output schema exists to document this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the 'domain' parameter clearly documented as 'The domain, e.g. example.com'. Per guidelines, with high schema coverage the baseline is 3. The description references 'for a domain' which aligns with the schema but does not add additional semantic context beyond the schema definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Get') and resources ('intelligence picture') and explicitly lists the specific data returned (site goal, ICP, sitemap structure, cross-page patterns, scan coverage). It clearly distinguishes from sibling tools 'get_scan' and 'scan_website' by positioning this as an intelligence/overview tool rather than a scanning operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal guidance ('Call this at the start of any SEO session') indicating when to invoke the tool. However, it does not explicitly mention when NOT to use it or directly contrast with sibling alternatives like 'get_scan' for specific scan retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

next_issue: A

Get the single most actionable SEO todo for the user's site. Priority: recurrence-review > critical > warning > info; newest within a tier. Returns markdown that copy-pastes into a coding session as a task. When nothing is pending, returns the next scheduled scan time so the caller can tell the user to check back later. Requires an API key (get one at https://seolint.dev/api).
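The stated priority order reduces to a simple sort key. The sketch below assumes a hypothetical issue record with a 'tier' field and a numeric 'created_at' timestamp; it illustrates the documented ordering, not the server's actual implementation:

```python
# Lower rank sorts first: recurrence-review > critical > warning > info.
TIER_RANK = {"recurrence-review": 0, "critical": 1, "warning": 2, "info": 3}

def pick_next_issue(issues):
    """Return the single most actionable issue, or None when nothing
    is pending (the real tool then reports the next scheduled scan)."""
    if not issues:
        return None
    # Newest within a tier: negate the timestamp so min() prefers it.
    return min(issues, key=lambda i: (TIER_RANK[i["tier"]], -i["created_at"]))
```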

Parameters

Name    Required  Description                                                                            Default
apiKey  Yes       Your SEOLint API key. Get it at https://seolint.dev/api.
domain  No        Optional domain override, e.g. example.com. Falls back to the user's primary_domain.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the prioritization logic (recurrence-review > critical > warning > info; newest within a tier), output format (markdown for copy-pasting), fallback behavior (returns next scheduled scan time), and authentication requirement (API key). It does not mention rate limits, error handling, or response structure details, but covers essential operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by prioritization, output format, fallback behavior, and authentication in efficient sentences. Every sentence adds value without redundancy, making it highly concise and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does a good job covering the tool's behavior, output format, and authentication. It explains what is returned in both pending and non-pending cases. However, it lacks details on error scenarios or the exact structure of the markdown output, leaving some gaps for a tool with no structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (apiKey and domain). The description adds marginal value by reiterating the API key requirement and its source URL, but does not provide additional semantics beyond what the schema states. The baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the single most actionable SEO todo for the user's site.' It specifies the verb ('Get'), resource ('SEO todo'), and scope ('single most actionable'), distinguishing it from siblings like 'get_scan' (likely returns scan results) and 'scan_website' (likely initiates a scan). The prioritization logic further clarifies what 'most actionable' means.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to fetch the next SEO task, with fallback behavior when nothing is pending. However, it does not explicitly state when NOT to use it or name alternatives among sibling tools (e.g., 'get_scan' for raw scan data). The guidance is practical but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_website: B

Scan a website for SEO, performance, accessibility, and AI search issues. Returns structured issues with LLM-ready fix instructions.

Parameters

Name  Required  Description                                     Default
url   Yes       The full URL to scan, e.g. https://example.com
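The schema's example (https://example.com) suggests the tool expects a fully qualified URL. A hypothetical pre-flight check a client could run before calling the tool:

```python
from urllib.parse import urlparse

def is_full_url(value: str) -> bool:
    """Accept only values with an http(s) scheme and a host, matching
    the 'full URL' phrasing in the parameter description."""
    parts = urlparse(value)
    return parts.scheme in ("http", "https") and bool(parts.netloc)
```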
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses output format ('structured issues with LLM-ready fix instructions') compensating for missing output schema, but omits execution time, async behavior, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action, no redundancy—every clause delivers essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers functionality and return values for a single-parameter tool despite lacking formal output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description adds no parameter details, but schema has 100% coverage with clear descriptions, meeting baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear, specific action ('Scan') and target resource with four distinct issue categories, though it lacks explicit differentiation from the sibling 'get_scan'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus the sibling 'get_scan', nor on the workflow sequence (initiate vs. retrieve).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
