automatelab-ai-seo

by AutomateLab-tech (Official)

Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default
MAX_BYTES | No | Maximum response body size in bytes (5 MB). | 5242880
USER_AGENT | No | HTTP User-Agent on all fetches. | automatelab-ai-seo-mcp/0.1.0 (+https://github.com/AutomateLab-tech/ai-seo)
RESPECT_ROBOTS | No | Global default for robots.txt compliance. Set "false" to disable. | true
FETCH_TIMEOUT_MS | No | Per-request timeout in milliseconds. | 15000
INTER_REQUEST_DELAY_MS | No | Minimum delay between requests to the same host within a tool call. | 1500
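Assuming the server reads these variables at startup with the documented defaults, the resolution logic can be sketched in Python. The function name and dictionary keys are illustrative, not the server's actual code; only the variable names and defaults come from the table above:

```python
import os

def load_config(env=os.environ):
    """Resolve the documented environment variables with their defaults."""
    return {
        "max_bytes": int(env.get("MAX_BYTES", 5_242_880)),  # 5 MB cap
        "user_agent": env.get(
            "USER_AGENT",
            "automatelab-ai-seo-mcp/0.1.0 (+https://github.com/AutomateLab-tech/ai-seo)"),
        # Any value other than "false" keeps robots.txt compliance on.
        "respect_robots": env.get("RESPECT_ROBOTS", "true").lower() != "false",
        "fetch_timeout_ms": int(env.get("FETCH_TIMEOUT_MS", 15_000)),
        "inter_request_delay_ms": int(env.get("INTER_REQUEST_DELAY_MS", 1_500)),
    }

cfg = load_config(env={})  # empty environment -> all defaults apply
```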

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | { "listChanged": true }

Tools

Functions exposed to the LLM to take actions

audit_page

Full AI-SEO audit of a single URL: returns categorized findings (info/warning/error) with severity, fix instructions, and a 0-100 composite score plus per-dimension subscores.

Read-only. Fetches the URL once and runs every sub-audit (schema, robots, technical, sitemap, AI-Overview eligibility) against the response. No writes, no third-party APIs, no auth required, no rate limits beyond polite per-host throttling.

Deterministic, rule-based scoring; no LLM calls. Same URL + same input flags returns the same score.

When to use: the default entry point for auditing any page. Use this instead of calling check_technical / audit_schema / check_robots / check_sitemap / score_ai_overview_eligibility individually unless you specifically need only one dimension - this tool composes all of them.
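The composite score can be illustrated with a small sketch. The dimension weights below are hypothetical (the server does not document its weighting); only the shape of the calculation (deterministic, weighted subscores on a 0-100 scale) follows the description above:

```python
# Hypothetical weights summing to 100; the server's actual weighting
# is not documented.
WEIGHTS = {"schema": 25, "robots": 15, "technical": 25,
           "sitemap": 10, "ai_overview": 25}

def composite_score(subscores):
    """Weighted 0-100 composite from per-dimension 0-100 subscores."""
    return sum(WEIGHTS[dim] * subscores[dim] for dim in WEIGHTS) // 100

score = composite_score({"schema": 80, "robots": 100, "technical": 70,
                         "sitemap": 90, "ai_overview": 60})
```

Because the calculation is pure integer arithmetic over the inputs, the same subscores always yield the same composite, matching the tool's determinism guarantee.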

audit_schema

Validate JSON-LD structured data against Schema.org rules and AI-citation best practices. Accepts either a URL (fetched) or a raw JSON string (parsed directly).

Read-only when given url (one HTTP GET). Zero network when given schema_json. No writes.

Deterministic, rule-based; no LLM. Validates required/recommended properties, @context correctness, sameAs links, and AI-search-friendly patterns.

When to use: focused JSON-LD audits, or to validate a schema block you're about to ship. For a full page audit that includes schema + everything else, use audit_page instead.

Either url or schema_json must be provided. If both are provided, schema_json wins and no fetch happens.
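The kind of deterministic, rule-based JSON-LD check described above can be sketched as follows. The required-property table is a tiny illustrative subset, not the server's full Schema.org ruleset, and the function name is hypothetical:

```python
import json

# Illustrative subset of required properties per @type.
REQUIRED = {"Article": {"headline", "author"}, "Organization": {"name", "url"}}

def audit_schema_json(raw):
    """Return (severity, message) findings for a raw JSON-LD string."""
    findings = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [("error", f"invalid JSON: {exc}")]
    if data.get("@context") != "https://schema.org":
        findings.append(("warning", "@context should be https://schema.org"))
    t = data.get("@type")
    for prop in sorted(REQUIRED.get(t, set()) - data.keys()):
        findings.append(("error", f"{t} is missing required property '{prop}'"))
    if "sameAs" not in data:
        findings.append(("info", "no sameAs links for entity disambiguation"))
    return findings
```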

audit_canonical

Audit a page's canonical link integrity: presence, self-reference, cross-domain mismatches, trailing-slash hygiene, and og:url consistency.

Read-only. One HTTP GET to fetch the HEAD section.

Deterministic, rule-based; no LLM.

When to use: a focused canonical-only audit (e.g. debugging a duplicate-content issue). For a full HEAD audit including OpenGraph, hreflang, noindex, title, use check_technical. For everything-on-a-page, use audit_page.
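A minimal sketch of the canonical rules listed above (presence, cross-domain mismatch, trailing-slash hygiene, og:url consistency). The function signature is illustrative; the real tool fetches the page itself:

```python
from urllib.parse import urlparse

def audit_canonical(page_url, canonical_href, og_url=None):
    """Rule-based findings for a page URL and its rel=canonical href."""
    if not canonical_href:
        return [("warning", "no rel=canonical link found")]
    findings = []
    page, canon = urlparse(page_url), urlparse(canonical_href)
    if canon.netloc and canon.netloc != page.netloc:
        findings.append(("warning", "canonical points to a different domain"))
    # Same path modulo a trailing slash is a hygiene issue, not an error.
    if canon.path.rstrip("/") == page.path.rstrip("/") and canon.path != page.path:
        findings.append(("info", "canonical differs from page URL only by trailing slash"))
    if og_url is not None and og_url != canonical_href:
        findings.append(("warning", "og:url does not match canonical"))
    return findings
```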

check_robots

Fetch and parse a domain's robots.txt; report per-crawler allow/disallow posture for every known AI training crawler (GPTBot, CCBot, Anthropic-AI, Google-Extended, etc.), AI search crawlers (ChatGPT-User, PerplexityBot, OAI-SearchBot), and user-triggered fetchers.

Read-only. One HTTP GET to /robots.txt. No auth, no rate limits applied.

Deterministic, rule-based; no LLM. Returns structured findings with per-crawler status.

When to use: figuring out which AI crawlers a site blocks vs allows. Combine with check_sitemap for a full pre-crawl audit. Distinct from audit_page which evaluates a single URL; this evaluates a whole-domain policy.
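The per-crawler posture report can be approximated with the standard library's robots.txt parser. This is a sketch, not the server's implementation, and the crawler list is a sample of the user agents named above:

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "CCBot", "anthropic-ai", "Google-Extended",
               "ChatGPT-User", "PerplexityBot", "OAI-SearchBot"]

def crawler_posture(robots_txt, site="https://example.com/"):
    """Map each known AI crawler to 'allowed'/'blocked' for the site root."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {ua: ("allowed" if rp.can_fetch(ua, site) else "blocked")
            for ua in AI_CRAWLERS}

# GPTBot is explicitly disallowed; everyone else falls through to "*".
posture = crawler_posture(
    "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nDisallow:\n")
```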

check_sitemap

Validate a domain's XML sitemap: presence, accessibility, URL count, lastmod freshness, sitemap-index handling, and image/video sitemap extensions.

Read-only. Issues one HTTP GET for robots.txt/sitemap discovery, then up to max_urls_to_check HEAD requests against sampled URLs.

Deterministic, rule-based; no LLM.

When to use: site-wide indexing audits. Pair with check_robots for a full pre-crawl picture. For per-page checks, use audit_page or check_technical instead.
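The lastmod-freshness part of the check can be sketched with the standard XML parser. The staleness threshold and function name are assumptions for illustration:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def stale_urls(sitemap_xml, now, max_age_days=180):
    """Return <loc> values whose <lastmod> is older than max_age_days."""
    root = ET.fromstring(sitemap_xml)
    stale = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod:
            dt = datetime.fromisoformat(lastmod).replace(tzinfo=timezone.utc)
            if now - dt > timedelta(days=max_age_days):
                stale.append(loc)
    return stale

SITEMAP = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/old</loc><lastmod>2020-01-01</lastmod></url>
  <url><loc>https://example.com/new</loc><lastmod>2025-06-01</lastmod></url>
</urlset>"""

result = stale_urls(SITEMAP, datetime(2025, 7, 1, tzinfo=timezone.utc))
```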

check_technical

Audit a page's HEAD section for technical signals relevant to AI crawlers: HTTPS, canonical, OpenGraph, Twitter Card, hreflang, noindex, and title-vs-H1 hygiene.

Read-only. One HTTP GET, inspects HEAD only (body is not parsed).

Deterministic, rule-based; no LLM.

When to use: when you specifically need HEAD-tag audit findings. For the full page including schema and AI-Overview scoring, use audit_page. For canonical-only, use audit_canonical.
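Extracting HEAD signals like these needs nothing beyond the standard library's HTML parser. A sketch of the pattern, covering only title, robots, and canonical (the real tool checks more signals):

```python
from html.parser import HTMLParser

class HeadScanner(HTMLParser):
    """Collect a few HEAD signals from raw HTML."""
    def __init__(self):
        super().__init__()
        self.tags = {"canonical": None, "robots": None}
        self._in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.tags["canonical"] = a.get("href")
        elif tag == "meta" and a.get("name") == "robots":
            self.tags["robots"] = a.get("content")
        elif tag == "title":
            self._in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
    def handle_data(self, data):
        if self._in_title:
            self.title += data

def scan_head(html):
    s = HeadScanner()
    s.feed(html)
    findings = []
    if s.tags["robots"] and "noindex" in s.tags["robots"]:
        findings.append(("error", "page is noindexed"))
    if not s.tags["canonical"]:
        findings.append(("warning", "missing rel=canonical"))
    return s.title, findings

title, findings = scan_head(
    '<head><title>Hello</title>'
    '<meta name="robots" content="noindex,follow"></head>')
```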

score_ai_overview_eligibility

Score a page's probability of appearing in Google AI Overviews. Returns an overall 0-100 score plus six factor subscores: semantic completeness, structured data, E-E-A-T signals, entity density, freshness, and technical hygiene.

Read-only. One HTTP GET.

Deterministic, rule-based scoring derived from published 2025-2026 AI-Overview correlation studies. No LLM calls. Same URL returns the same score on repeated runs.

When to use: AI-Overview-specific prioritization. For a multi-dimensional audit that includes this scoring plus everything else, use audit_page. For citation-worthiness of a specific text passage (rather than a URL ranking probability), use score_citation_worthiness.

generate_llms_txt

Generate a spec-compliant llms.txt (and optionally llms-full.txt) for a domain by reading its sitemap, sampling up to max_pages pages, and synthesizing a grouped, sectioned summary.

Read-only. Issues one HTTP GET for the sitemap then one per sampled page.

Deterministic; no LLM. Output is the file content as a string - this tool does NOT write to disk or upload anywhere. The caller is responsible for hosting the resulting file at https://<domain>/llms.txt.

When to use: bootstrapping llms.txt for a site you own. To check an existing llms.txt, use validate_llms_txt instead.
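The output layout follows the public llms.txt convention: an H1 title, a blockquote summary, then "##" sections of markdown links. A sketch of the assembly step (the helper name and inputs are illustrative; the real tool derives them from the sitemap sample):

```python
def generate_llms_txt(site_name, summary, sections):
    """Assemble llms.txt content: H1 title, blockquote summary, then one
    '## Section' per group with '- [title](url): note' link lines."""
    lines = [f"# {site_name}", "", f"> {summary}", ""]
    for section, links in sections.items():
        lines.append(f"## {section}")
        lines += [f"- [{title}]({url}): {note}" for title, url, note in links]
        lines.append("")
    return "\n".join(lines)

content = generate_llms_txt(
    "Example Docs", "Developer documentation for Example.",
    {"Docs": [("Quickstart", "https://example.com/quickstart", "Setup guide")]})
```

As the description says, this yields a string; hosting it at https://&lt;domain&gt;/llms.txt is up to the caller.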

validate_llms_txt

Validate an existing llms.txt or llms-full.txt against the spec: structure, section ordering, link format, and (optionally) broken-link detection.

Read-only. One HTTP GET when given url; zero network when given content. Optional link-check issues HEAD requests against each link if check_links is true.

Deterministic; no LLM.

When to use: auditing an llms.txt you already have. To generate one from scratch, use generate_llms_txt.

Either url or content must be provided.
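The structural side of the validation (no link-checking, so zero network) can be sketched as a few deterministic rules over the file's lines; the exact findings the server emits may differ:

```python
import re

LINK_RE = re.compile(r"^- \[[^\]]+\]\(https?://[^)]+\)")

def validate_llms_txt(content):
    """Structural checks: H1 first line, blockquote summary,
    well-formed '- [name](url)' link entries."""
    findings = []
    lines = content.splitlines()
    if not lines or not lines[0].startswith("# "):
        findings.append(("error", "first line must be an H1 title"))
    if not any(line.startswith("> ") for line in lines):
        findings.append(("warning", "no blockquote summary after the title"))
    for i, line in enumerate(lines, 1):
        if line.startswith("- ") and not LINK_RE.match(line):
            findings.append(("error", f"line {i}: malformed link entry"))
    return findings
```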

score_citation_worthiness

Score how citable a page or text block is for AI engines (ChatGPT, Claude, Perplexity, Google AI Overviews). Evaluates BLUF (bottom-line-up-front) opening, FAQ patterns, statistic density, entity clarity, and answer-shape fit for the optional target_query.

Read-only when given url (one HTTP GET). Zero network when given text. No writes.

Deterministic, rule-based; no LLM calls. Returns reproducible scores.

When to use: pre-publish content QA, or to triage which existing pages are worth optimizing for AI citation first. Distinct from score_ai_overview_eligibility which scores Google-AI-Overview ranking probability for a URL; this scores the inherent citability of a text passage regardless of host.

Either url or text must be provided.
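The signals named above (BLUF opening, statistic density, FAQ patterns) lend themselves to simple text heuristics. A sketch with illustrative thresholds, not the server's actual scoring:

```python
import re

def citation_signals(text):
    """Count citability signals in a text block (illustrative heuristics)."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    first = sentences[0] if sentences else ""
    return {
        # BLUF: the opening sentence is short and declarative.
        "bluf": len(first.split()) <= 25 and first.endswith("."),
        # Statistic density: numeric tokens per 100 words.
        "stats_per_100_words": round(
            100 * len(re.findall(r"\d[\d,.%]*", text))
            / max(len(text.split()), 1), 1),
        # FAQ pattern: question-shaped sentences.
        "questions": sum(1 for s in sentences if s.endswith("?")),
    }

sig = citation_signals(
    "Solar capacity grew 24% in 2024. What drove growth? Lower panel costs.")
```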

rewrite_for_aeo

Rewrite a content block for Answer Engine Optimization. Adds a BLUF opening, FAQ structure, schema additions, and concise question-shaped headings tuned for ChatGPT / Perplexity / Google AI Overviews.

Read-only when given url (one HTTP GET). Zero network when given text. The tool does NOT write back to the URL - it only returns the rewritten content as a string. No side effects on the source.

This tool delegates the actual rewrite to the calling LLM via MCP sampling - it does not call any external API itself. The MCP host's model produces the rewrite. Same input may produce different output across runs (model-dependent).

When to use: optimizing content for direct-answer surfaces (definitions, how-tos, FAQs). For Generative Engine Optimization (entity-rich, comparison-ready synthesis), use rewrite_for_geo instead.

Either url or text must be provided. target_query is required.
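Per the MCP specification, a server delegates generation to the host's model by sending a sampling/createMessage request. The prompt text and helper below are hypothetical, shown only to illustrate the request shape:

```python
def build_sampling_request(content, target_query, max_tokens=2000):
    """Shape of an MCP sampling/createMessage request a server sends so the
    host's model performs the rewrite. Prompt wording is illustrative."""
    return {
        "method": "sampling/createMessage",
        "params": {
            "systemPrompt": ("Rewrite the content for Answer Engine "
                             "Optimization: BLUF opening, FAQ structure, "
                             "question-shaped headings."),
            "messages": [{
                "role": "user",
                "content": {"type": "text",
                            "text": f"Target query: {target_query}\n\n{content}"},
            }],
            "maxTokens": max_tokens,
        },
    }

req = build_sampling_request("Widgets reduce toil.", "what are widgets")
```

Because the host's model produces the completion, output varies across runs, which is why this tool (unlike the audits) is not deterministic.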

rewrite_for_geo

Rewrite a content block for Generative Engine Optimization: entity-rich, comparison-ready, synthesis-friendly. Tuned for surfaces that summarize across sources (Perplexity, Google AI Mode, Claude search).

Read-only on input. Does NOT write back to the source URL - returns the rewritten content as a string.

This tool delegates the actual rewrite to the calling LLM via MCP sampling - it does not call any external API itself. The MCP host's model produces the rewrite. Output may vary across runs (model-dependent).

When to use: optimizing for synthesis-style answers across multiple sources. For direct-answer (BLUF + FAQ) optimization on a single page, use rewrite_for_aeo instead.

Either url or text must be provided. target_query is required.

extract_entities

Extract named entities, linked concepts, and sameAs graph nodes from a page's content and structured data. Combines body-text NER heuristics with JSON-LD @type / sameAs walking.

Read-only when given url (one HTTP GET). Zero network when given text.

Deterministic, rule-based; no LLM. Output is a list of entities with type, confidence, and any sameAs URIs found in structured data.

When to use: building an entity map for schema generation, or auditing whether a page's entities match its target topic. To validate the JSON-LD itself, use audit_schema.

Either url or text must be provided.
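The two-source approach (body-text heuristics plus a JSON-LD walk) can be sketched as below. The capitalization heuristic and default "Thing" type are illustrative simplifications of whatever NER rules the server actually applies:

```python
def extract_entities(text, jsonld=None):
    """Heuristic NER (non-sentence-initial capitalized words) merged with
    @type / sameAs data from JSON-LD nodes."""
    entities = {}
    tokens = text.split()
    for i, tok in enumerate(tokens):
        word = tok.strip(".,!?;:")
        # Sentence-initial capitals are ambiguous, so skip them.
        sentence_start = i == 0 or tokens[i - 1][-1:] in ".!?"
        if word[:1].isupper() and not sentence_start:
            entities.setdefault(word, {"type": "Thing", "sameAs": []})
    # Structured data overrides the heuristic guess.
    for node in (jsonld or []):
        name = node.get("name")
        if name:
            entry = entities.setdefault(name, {"type": "Thing", "sameAs": []})
            entry["type"] = node.get("@type", "Thing")
            entry["sameAs"] = node.get("sameAs", [])
    return entities

ents = extract_entities(
    "The report cites Anthropic research. Perplexity summarizes sources.",
    jsonld=[{"@type": "Organization", "name": "Anthropic",
             "sameAs": ["https://en.wikipedia.org/wiki/Anthropic"]}])
```

Note how "Perplexity" is dropped by the heuristic because it opens a sentence; a sameAs link in structured data would restore it, which is the disambiguation value the tool description points at.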

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AutomateLab-tech/ai-seo'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.