# automatelab-ai-seo

## Server Configuration

Environment variables recognized by the server. All are optional; defaults are shown.
| Name | Required | Description | Default |
|---|---|---|---|
| MAX_BYTES | No | Maximum response body size in bytes (default 5 MiB). | 5242880 |
| USER_AGENT | No | HTTP User-Agent on all fetches. | automatelab-ai-seo-mcp/0.1.0 (+https://github.com/AutomateLab-tech/ai-seo) |
| RESPECT_ROBOTS | No | Global default for robots.txt compliance. Set "false" to disable. | true |
| FETCH_TIMEOUT_MS | No | Per-request timeout in milliseconds. | 15000 |
| INTER_REQUEST_DELAY_MS | No | Minimum delay between requests to the same host within a tool call. | 1500 |
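These variables are typically supplied through the MCP client's server configuration rather than a shell profile. A minimal sketch of an `mcpServers` entry in the Claude Desktop style; the `npx` command and package name are assumptions, not confirmed by this listing:

```json
{
  "mcpServers": {
    "ai-seo": {
      "command": "npx",
      "args": ["-y", "automatelab-ai-seo"],
      "env": {
        "MAX_BYTES": "10485760",
        "FETCH_TIMEOUT_MS": "15000",
        "INTER_REQUEST_DELAY_MS": "1500",
        "RESPECT_ROBOTS": "true"
      }
    }
  }
}
```

Values are strings because environment variables are untyped; omit any entry to fall back to the defaults above.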
## Capabilities

Features and capabilities supported by this server.

| Capability | Details |
|---|---|
| tools | `{ "listChanged": true }` |
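Because the server declares `listChanged: true`, a client should expect the standard MCP tool-list-changed notification and re-issue `tools/list` when it arrives. The JSON-RPC message defined by the MCP specification looks like:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
```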
## Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|---|---|
| audit_page | Full AI-SEO audit of a single URL: returns categorized findings (info/warning/error) with severity, fix instructions, and a 0-100 composite score plus per-dimension subscores. Read-only: fetches the URL once and runs every sub-audit (schema, robots, technical, sitemap, AI-Overview eligibility) against the response. No writes, no third-party APIs, no auth required, and no rate limits beyond polite per-host throttling. Deterministic, rule-based scoring; no LLM calls. The same URL with the same input flags returns the same score. When to use: the default entry point for any AI-SEO assessment of a page. |
| audit_schema | Validate JSON-LD structured data against Schema.org rules and AI-citation best practices. Accepts either a URL (fetched) or a raw JSON string (parsed directly). Read-only when given a URL; a raw JSON string is parsed with no network access. Deterministic, rule-based; no LLM. Validates required/recommended properties, @context correctness, sameAs links, and AI-search-friendly patterns. When to use: focused JSON-LD audits, or to validate a schema block you're about to ship. For a full page audit that includes schema plus everything else, use audit_page. Either a URL or a raw JSON string is required. |
| audit_canonical | Audit a page's canonical link integrity: presence, self-reference, cross-domain mismatches, trailing-slash hygiene, and og:url consistency. Read-only. One HTTP GET to fetch the HEAD section. Deterministic, rule-based; no LLM. When to use: a focused canonical-only audit (e.g. debugging a duplicate-content issue). For a full HEAD audit including OpenGraph, hreflang, noindex, and title, use check_technical. |
| check_robots | Fetch and parse a domain's robots.txt; report per-crawler allow/disallow posture for every known AI training crawler (GPTBot, CCBot, Anthropic-AI, Google-Extended, etc.), AI search crawlers (ChatGPT-User, PerplexityBot, OAI-SearchBot), and user-triggered fetchers. Read-only. One HTTP GET to /robots.txt. No auth, no rate limits applied. Deterministic, rule-based; no LLM. Returns structured findings with per-crawler status. When to use: figuring out which AI crawlers a site blocks vs allows. Combine with check_sitemap for a site-wide crawlability picture. |
| check_sitemap | Validate a domain's XML sitemap: presence, accessibility, URL count, lastmod freshness, sitemap-index handling, and image/video sitemap extensions. Read-only. Issues N+1 HTTP GETs: one for robots.txt and sitemap discovery, then up to N more for child sitemaps. Deterministic, rule-based; no LLM. When to use: site-wide indexing audits. Pair with check_robots for the crawler-access side of the picture. |
| check_technical | Audit a page's HEAD section for technical signals relevant to AI crawlers: HTTPS, canonical, OpenGraph, Twitter Card, hreflang, noindex, and title-vs-H1 hygiene. Read-only. One HTTP GET; inspects HEAD only (the body is not parsed). Deterministic, rule-based; no LLM. When to use: when you specifically need HEAD-tag audit findings. For the full page including schema and AI-Overview scoring, use audit_page. |
| score_ai_overview_eligibility | Score a page's probability of appearing in Google AI Overviews. Returns an overall 0-100 score plus six factor subscores: semantic completeness, structured data, E-E-A-T signals, entity density, freshness, and technical hygiene. Read-only. One HTTP GET. Deterministic, rule-based scoring derived from published 2025-2026 AI-Overview correlation studies. No LLM calls. The same URL returns the same score on repeated runs. When to use: AI-Overview-specific prioritization. For a multi-dimensional audit that includes this scoring plus everything else, use audit_page. |
| generate_llms_txt | Generate a spec-compliant llms.txt (and optionally llms-full.txt) for a domain by reading its sitemap and sampling a capped number of its pages. Read-only. Issues one HTTP GET for the sitemap, then one per sampled page. Deterministic; no LLM. Output is the file content as a string - this tool does NOT write to disk or upload anywhere. The caller is responsible for hosting the resulting file at /llms.txt. When to use: bootstrapping llms.txt for a site you own. To check an existing llms.txt, use validate_llms_txt. |
| validate_llms_txt | Validate an existing llms.txt or llms-full.txt against the spec: structure, section ordering, link format, and (optionally) broken-link detection. Read-only. One HTTP GET when given a URL; raw file content is validated with no network access. Deterministic; no LLM. When to use: auditing an llms.txt you already have. To generate one from scratch, use generate_llms_txt. Either a URL or the raw file content is required. |
| score_citation_worthiness | Score how citable a page or text block is for AI engines (ChatGPT, Claude, Perplexity, Google AI Overviews). Evaluates BLUF (bottom-line-up-front) opening, FAQ patterns, statistic density, entity clarity, and answer-shape fit for an optional target query. Read-only when given a URL; raw text is scored with no network access. Deterministic, rule-based; no LLM calls. Returns reproducible scores. When to use: pre-publish content QA, or to triage which existing pages are worth optimizing for AI citation first. Distinct from score_ai_overview_eligibility, which targets Google AI Overviews specifically. Either a URL or raw text is required. |
| rewrite_for_aeo | Rewrite a content block for Answer Engine Optimization. Adds a BLUF opening, FAQ structure, schema additions, and concise question-shaped headings tuned for ChatGPT / Perplexity / Google AI Overviews. Read-only on input; the rewritten content is returned as a string and nothing is written back to the source. This tool delegates the actual rewrite to the calling LLM via MCP sampling - it does not call any external API itself. The MCP host's model produces the rewrite, so the same input may produce different output across runs (model-dependent). When to use: optimizing content for direct-answer surfaces (definitions, how-tos, FAQs). For Generative Engine Optimization (entity-rich, comparison-ready synthesis), use rewrite_for_geo. Either a URL or raw text is required. |
| rewrite_for_geo | Rewrite a content block for Generative Engine Optimization: entity-rich, comparison-ready, synthesis-friendly. Tuned for surfaces that summarize across sources (Perplexity, Google AI Mode, Claude search). Read-only on input. Does NOT write back to the source URL - returns the rewritten content as a string. This tool delegates the actual rewrite to the calling LLM via MCP sampling - it does not call any external API itself. The MCP host's model produces the rewrite, so output may vary across runs (model-dependent). When to use: optimizing for synthesis-style answers across multiple sources. For direct-answer (BLUF + FAQ) optimization on a single page, use rewrite_for_aeo. Either a URL or raw text is required. |
| extract_entities | Extract named entities, linked concepts, and sameAs graph nodes from a page's content and structured data. Combines body-text NER heuristics with the page's JSON-LD. Read-only when given a URL; raw text is analyzed with no network access. Deterministic, rule-based; no LLM. Output is a list of entities with type, confidence, and any sameAs URIs found in structured data. When to use: building an entity map for schema generation, or auditing whether a page's entities match its target topic. To validate the JSON-LD itself, use audit_schema. Either a URL or raw text is required. |
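As a rough illustration of the per-crawler posture that check_robots reports, Python's standard-library robots.txt parser can answer allow/disallow per user agent. The robots.txt body, crawler list, and probe URL below are invented for the example; the real tool's parsing rules and crawler catalog may differ:

```python
from urllib.robotparser import RobotFileParser

# Invented example robots.txt: blocks GPTBot, allows everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

# A few of the AI crawlers named in the tool description.
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "PerplexityBot"]


def crawler_posture(robots_body, crawlers, url="https://example.com/"):
    """Return {crawler_name: allowed} for one URL under one robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_body.splitlines())
    return {name: parser.can_fetch(name, url) for name in crawlers}


print(crawler_posture(ROBOTS_TXT, AI_CRAWLERS))
# GPTBot is disallowed; the other crawlers fall through to the "*" group.
```

In practice the server would fetch `/robots.txt` itself (one HTTP GET, as described above) rather than take a literal string.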
## Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| No resources | |
## MCP directory API

Glama provides information about MCP servers via its MCP directory API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/AutomateLab-tech/ai-seo'
```

If you have feedback or need assistance with the MCP directory API, please join the Glama Discord server.