Glama
RichardDillman

SEO Audit MCP Server

run_audit

Perform a complete SEO audit by analyzing sitemaps, sampling pages, and generating prioritized recommendations with cached reports.

Instructions

FULL AUDIT - Run a complete SEO audit with automatic sampling, caching, and report generation.

This is the main audit tool that orchestrates the entire workflow:

  1. Discovers and analyzes sitemaps

  2. Identifies route patterns and creates sampling strategy

  3. Captures sample pages (cached - only fetches once)

  4. Analyzes SEO, structured data, technical issues, social graph

  5. Generates prioritized recommendations

  6. Saves everything to reports/[sitename]/ folder
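From the client side, the six steps above are triggered by a single MCP `tools/call` request. As a sketch, such a request might carry arguments like the following (the values are illustrative, not server defaults; only `baseUrl` is required, and omitted parameters fall back to the defaults in the input schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_audit",
    "arguments": {
      "baseUrl": "https://example.com",
      "maxSitemaps": 10,
      "concurrency": 2
    }
  }
}
```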

The audit captures pages ONCE and stores:

  • HTML snapshots for inspection

  • Full analysis data as JSON

  • Final report as JSON + Markdown
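Under that storage scheme, a cached report folder might look roughly like this (the file and folder names are hypothetical; only the `reports/[sitename]/` root is stated by the tool):

```
reports/
  example.com/
    snapshots/        # HTML snapshots for inspection
    analysis.json     # full analysis data as JSON
    report.json       # final report (JSON)
    report.md         # final report (Markdown)
```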

Returns comprehensive findings and prioritized fix recommendations.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| baseUrl | Yes | The base URL of the site to audit (e.g., https://talent.com) | (none) |
| reportsDir | No | Directory to save reports | ./reports |
| maxSitemaps | No | Maximum sitemaps to process | 15 |
| maxUrlsPerSitemap | No | Maximum URLs per sitemap for pattern analysis | 2000 |
| samplesPerRouteType | No | Override samples per route type | auto, based on route importance |
| concurrency | No | Concurrent page captures | 2 |
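The schema defaults above can be mirrored client-side. The sketch below (a hypothetical helper, not part of the server) assembles an arguments payload for `run_audit`, applying the documented defaults and leaving `samplesPerRouteType` unset unless given, since its default is computed server-side from route importance:

```python
# Documented defaults from the run_audit input schema.
DEFAULTS = {
    "reportsDir": "./reports",
    "maxSitemaps": 15,
    "maxUrlsPerSitemap": 2000,
    "concurrency": 2,
}


def build_run_audit_args(base_url: str, **overrides) -> dict:
    """Build the arguments dict for a run_audit tool call.

    Unknown keys are rejected early so a typo fails client-side
    rather than being silently ignored by the server.
    """
    allowed = set(DEFAULTS) | {"samplesPerRouteType"}
    unknown = set(overrides) - allowed
    if unknown:
        raise KeyError(f"unknown run_audit parameter(s): {sorted(unknown)}")
    return {"baseUrl": base_url, **DEFAULTS, **overrides}


args = build_run_audit_args("https://talent.com", maxSitemaps=5)
print(args["maxSitemaps"])  # explicit overrides win over defaults
```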
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: caching ('captures pages ONCE'), output generation ('saves everything to reports/[sitename]/ folder'), and workflow steps. It could improve by mentioning potential side effects like network usage or time requirements, but it covers most critical aspects well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a bolded summary upfront and a numbered list detailing steps, making it easy to scan. It could be slightly more concise by reducing repetition (e.g., 'cached' mentioned multiple times), but overall, it's efficient and front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, no annotations), the description provides substantial context: it explains the workflow, output storage, and return values ('Returns comprehensive findings and prioritized fix recommendations'). It could enhance completeness by detailing error handling or performance implications, but it's largely sufficient for understanding the tool's scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no specific parameter information beyond what the schema provides, such as explaining how parameters interact or affect the audit process. This meets the baseline for high schema coverage but doesn't add extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run a complete SEO audit with automatic sampling, caching, and report generation.' It specifies the verb ('Run'), resource ('SEO audit'), and scope ('complete'), distinguishing it from siblings like analyze_page or crawl_site by emphasizing its comprehensive, orchestrated workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool by describing it as 'the main audit tool that orchestrates the entire workflow,' suggesting it's for full audits rather than partial analyses. However, it lacks explicit guidance on when to use alternatives like plan_audit or sample_pages, which could help differentiate usage scenarios more clearly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
