RichardDillman

SEO Audit MCP Server

plan_audit

Analyze sitemaps to create intelligent sampling strategies for large websites, identifying route patterns and recommending pages for SEO audit analysis.

Instructions

RECOMMENDED FIRST STEP - Analyze sitemaps and create an intelligent sampling strategy for large sites.

This tool is essential for job boards and large sites with 100k+ pages. Instead of crawling everything, it:

  1. Discovers and validates all sitemaps (robots.txt + common locations)

  2. Identifies distinct route patterns (job pages, category pages, location pages, etc.)

  3. Estimates total pages per route type

  4. Generates a smart sampling strategy

  5. Recommends which pages to analyze with Lighthouse
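Step 2 above (identifying distinct route patterns) can be sketched in a few lines. This is a hypothetical illustration, not the server's actual implementation: it collapses numeric IDs and hyphenated slugs in URL paths into placeholders, then counts how many sitemap URLs fall under each pattern.

```python
import re
from collections import Counter
from urllib.parse import urlparse

def route_pattern(url: str) -> str:
    """Collapse a URL path into a generic route pattern by replacing
    numeric IDs and hyphenated slug segments with placeholders."""
    segments = urlparse(url).path.strip("/").split("/")
    out = []
    for seg in segments:
        if seg.isdigit():
            out.append("{id}")
        elif re.fullmatch(r"[a-z0-9]+(?:-[a-z0-9]+)+", seg):
            out.append("{slug}")
        else:
            out.append(seg)
    return "/" + "/".join(out)

def classify(urls):
    """Count how many sitemap URLs match each route pattern."""
    return Counter(route_pattern(u) for u in urls)

urls = [
    "https://example.com/jobs/12345",
    "https://example.com/jobs/67890",
    "https://example.com/jobs/senior-data-engineer-nyc",
    "https://example.com/categories/engineering",
]
print(classify(urls))
```

With the sample URLs above, the two numeric job pages collapse into one `/jobs/{id}` pattern, the slugged job page becomes `/jobs/{slug}`, and the category page stays literal.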

Returns:

  • Sitemap validation (URL limits, lastmod coverage, compression)

  • Route pattern classification with estimated counts

  • Sampling strategy (how many pages to sample per type)

  • Issues, warnings, and recommendations
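A sampling strategy like the one returned here might allocate a per-route-type sample size so that a 100k-page route type doesn't dominate the Lighthouse budget. The allocation rule below (square-root with a cap and a floor) is an assumed heuristic for illustration, not the tool's documented algorithm:

```python
def sampling_plan(route_counts, per_type_cap=10, min_per_type=3):
    """Allocate a sample size per route type: audit every page for
    tiny route types, otherwise use a capped square-root allocation
    so very large types don't dominate the audit budget."""
    plan = {}
    for pattern, total in route_counts.items():
        if total <= min_per_type:
            plan[pattern] = total
        else:
            plan[pattern] = min(per_type_cap, max(min_per_type, round(total ** 0.5)))
    return plan

print(sampling_plan({"/jobs/{id}": 120000, "/categories/{slug}": 400, "/about": 1}))
```

Under this rule, both the 120,000-page and 400-page route types hit the cap of 10 samples, while the single `/about` page is always included.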

Use this BEFORE crawl_site or sample_pages to understand site structure.

Input Schema

Name                   Required  Description                                                Default
baseUrl                Yes       The base URL of the site (e.g., https://talent.com)        (none)
maxSitemapsToProcess   No        Maximum sitemaps to analyze                                20
maxUrlsPerSitemap      No        Maximum URLs to process per sitemap for pattern analysis   5000
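A caller would pass arguments matching this schema; a minimal sketch of building the argument payload, with the documented defaults applied client-side for illustration (the `build_arguments` helper is hypothetical, not part of the server):

```python
# Documented defaults from the input schema above.
DEFAULTS = {"maxSitemapsToProcess": 20, "maxUrlsPerSitemap": 5000}

def build_arguments(base_url, **overrides):
    """Merge caller overrides onto the documented defaults.
    baseUrl is the only required field."""
    args = {"baseUrl": base_url, **DEFAULTS}
    args.update(overrides)
    return args

print(build_arguments("https://talent.com", maxUrlsPerSitemap=2000))
```

Only `baseUrl` is required; the two limits exist to bound work on very large sites, so lowering them trades pattern-analysis coverage for speed.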
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by describing the multi-step process (discover sitemaps, identify patterns, estimate counts, generate strategy). It mentions what the tool returns (sitemap validation, route classification, sampling strategy, issues/warnings) and its non-destructive nature (analysis only). However, it doesn't specify performance characteristics like rate limits or timeout behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, process steps, returns, usage guidance) and front-loads the most important information ('RECOMMENDED FIRST STEP'). While comprehensive, it could be slightly more concise by combining some of the bullet points about returns into a single sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, and no output schema, the description provides excellent context about what the tool does, when to use it, and what it returns. It explains the multi-step analysis process and connects to sibling tools. The main gap is the lack of output schema, but the description compensates by listing return categories.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters adequately. The description doesn't add any parameter semantics beyond what's in the schema; it focuses on the tool's purpose and outputs rather than explaining parameter usage. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('analyze sitemaps', 'create sampling strategy') and resources ('large sites', 'job boards'). It explicitly distinguishes from siblings by recommending use 'BEFORE crawl_site or sample_pages' and mentions alternative tools like Lighthouse analysis, showing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('RECOMMENDED FIRST STEP', 'essential for job boards and large sites with 100k+ pages', 'BEFORE crawl_site or sample_pages') and when not to use it (for small sites or direct analysis). It names specific alternative tools (crawl_site, sample_pages, Lighthouse) for different scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RichardDillman/seo-audit-mcp'
