seo-mcp
Server Details
Six tools for SEO and AI-readability audits. 91 checks, 11 score modules.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: MetricSpot/mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 6 of 6 tools scored.
- Each tool serves a distinct purpose: running audits (authenticated vs anonymous), listing audits, retrieving results, fetching PDF reports, and fetching traffic data. No overlap.
- All tools follow a consistent verb_noun pattern in snake_case (e.g., run_audit, get_audit_pdf), making the set predictable.
- Six tools cover the core SEO auditing workflow without being excessive or insufficient for the domain.
- The toolset covers the full lifecycle: initiate an audit (with auth options), list past audits, retrieve detailed results, download PDF reports, and get organic traffic data. No obvious gaps.
Available Tools
6 tools

get_audit
Fetch a previously-run audit by id. Returns module scores (0-100), total score, all findings with severity, recommendation text, and links to the HTML report. Use this to poll a queued run_audit until status: complete. Requires an API key.
| Name | Required | Description | Default |
|---|---|---|---|
| audit_id | Yes | | |
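A minimal polling sketch using the MCP TypeScript SDK, assuming an already-connected `Client` over Streamable HTTP. The `status` values and the JSON shape of the result are inferred from the description above, not from a published output schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Poll get_audit until a queued run_audit finishes. The audit JSON is
// assumed to arrive as text content; field names follow the description.
async function waitForAudit(client: Client, auditId: string) {
  for (let attempt = 0; attempt < 20; attempt++) {
    const result = await client.callTool({
      name: "get_audit",
      arguments: { audit_id: auditId },
    });
    const audit = JSON.parse((result.content as any)[0].text);
    if (audit.status === "complete") return audit; // module scores, findings, report links
    await new Promise((r) => setTimeout(r, 3000)); // audits typically take 10-30s
  }
  throw new Error(`Audit ${auditId} did not complete in time`);
}
```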
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so the description carries the full burden. It discloses the return data and the polling use case, and implies read-only behavior by stating 'previously-run audit'. Adequate transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose and returns, second gives usage context. Front-loaded and concise, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple tool (one parameter, no output schema), the description lists the return fields and polling usage. Fully adequate for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter (audit_id), with 0% schema description coverage. The description adds context that it is a previously-run audit id, but gives no format or source details. A baseline score for low coverage, though the description adds some value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches a previously-run audit by id, and lists specific return data (module scores, total score, findings, recommendation, links). It distinguishes itself from siblings like run_audit and get_audit_pdf.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this to poll a queued `run_audit` until `status: complete`.' Also mentions API key requirement. Could add when not to use (e.g., use list_audits to find audits).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_audit_pdf
Return a signed download URL for the branded PDF report for a given audit id. If no PDF has been rendered yet, queues a render and returns status: queued — poll the same tool again, or fetch the URL directly once ready. Requires an API key.
| Name | Required | Description | Default |
|---|---|---|---|
| audit_id | Yes | | |
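A sketch of the queue-then-poll pattern the description implies, reusing the connected client and imports from the get_audit sketch above. The `status` value and the name of the field carrying the signed URL are assumptions.

```typescript
// Request the branded PDF; if rendering is still queued, retry the same call.
async function getPdfUrl(client: Client, auditId: string): Promise<string> {
  for (let attempt = 0; attempt < 10; attempt++) {
    const result = await client.callTool({
      name: "get_audit_pdf",
      arguments: { audit_id: auditId },
    });
    const body = JSON.parse((result.content as any)[0].text);
    if (body.status !== "queued") return body.url; // assumed field name
    await new Promise((r) => setTimeout(r, 2000)); // wait for the render to finish
  }
  throw new Error("PDF render did not finish in time");
}
```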
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses the queuing behavior, the status response, and the API key requirement. It could say more about URL expiration or rate limits, but it is sufficient.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences cover purpose, behavior, and requirement. No wasted words, front-loaded with main action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, description adequately explains both response types (URL or queued). Could mention signed URL expiration, but not critical for usage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% coverage, but description clearly states audit_id is the identifier and explains its role. Single parameter makes it unambiguous.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool returns a signed download URL for a PDF report for a given audit ID, and distinguishes from siblings that likely return data or lists.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly explains when to use (to get PDF), what happens if not ready (poll or fetch URL), and requires API key. No alternatives mentioned but context is clear.
get_organic_traffic
If the user has linked GA4 + Google Search Console, return the 28-day organic traffic snapshot for an audit: session count, daily trend, top landing pages, top queries, and indexing health. Returns connected: false if Google is not linked. Cached 24h server-side. Requires an API key.
| Name | Required | Description | Default |
|---|---|---|---|
| audit_id | Yes | | |
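A sketch of the unlinked-account check the description calls out, again reusing the connected client from the earlier sketches. Apart from the `connected` flag, the snapshot field names are assumptions.

```typescript
// Fetch the 28-day organic traffic snapshot, handling connected: false.
async function getTraffic(client: Client, auditId: string) {
  const result = await client.callTool({
    name: "get_organic_traffic",
    arguments: { audit_id: auditId },
  });
  const snapshot = JSON.parse((result.content as any)[0].text);
  if (snapshot.connected === false) {
    return null; // GA4 + Search Console not linked for this site
  }
  // Sessions, daily trend, top landing pages, top queries, indexing health.
  // Results are cached server-side for 24h, so repeated calls are cheap.
  return snapshot;
}
```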
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description discloses important behavioral traits: returns a failure indicator if Google is not linked, cached for 24 hours server-side, and requires an API key. It implies read-only operation ('snapshot'). This is good transparency for a tool without annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: first sentence states the main purpose and outputs, second adds failure case, caching, and authentication. Every sentence adds value, and it is front-loaded with the key action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter, no output schema, and no annotations, the description covers return content, failure mode, caching, and authentication. It does not describe the output format (e.g., JSON structure), but the listed data points give sufficient understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has one parameter `audit_id` with 0% description coverage. The description mentions 'for an audit,' which implies the parameter identifies the audit, but does not explain where to obtain it or its format (e.g., from `list_audits`). It adds some meaning but not full semantics.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's action: returning a 28-day organic traffic snapshot for an audit. It lists specific data points (session count, daily trend, etc.) and a failure case, distinguishing it from sibling tools like `get_audit` and `list_audits`.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies the prerequisite condition ('If the user has linked GA4 + Google Search Console') and indicates when the tool returns a failure indicator (`connected: false`). It does not explicitly state when not to use it or mention alternative tools, but the context is clear enough for selection.
list_audits
List the user's audits (most recent first, deduplicated by URL). Returns audit_id, url, status, total_score, created_at. Default limit 24, max 100. Use the returned audit_id with get_audit for full findings. Requires an API key.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
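A sketch chaining list_audits into get_audit as the description suggests, assuming a client connected as in the earlier sketches and assuming the list arrives as a JSON array of audit summaries.

```typescript
// List the five most recent audits, then pull full findings for the newest.
const listed = await client.callTool({
  name: "list_audits",
  arguments: { limit: 5 }, // optional; server default is 24, max 100
});
const audits = JSON.parse((listed.content as any)[0].text); // assumed array shape
const detail = await client.callTool({
  name: "get_audit",
  arguments: { audit_id: audits[0].audit_id },
});
```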
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses key behaviors: ordering, deduplication, default/max limit, returned fields, and auth requirement. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five short sentences: purpose and features, return fields, default and max limit, the next step with a sibling tool, and the auth requirement. No filler, front-loaded with critical info.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers return fields, parameter limit, ordering, deduplication, and auth. Complete for a list tool with one optional parameter.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, but the description adds essential info: 'Default limit 24, max 100', which is not in the schema itself. Fully compensates for the gap.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists user audits with specific ordering ('most recent first') and deduplication by URL, distinguishing it from siblings like get_audit (single audit) or run_audit.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions 'Requires an API key' and advises using the returned audit_id with get_audit for full findings. Lacks explicit when-not-to-use guidance but provides clear context.
run_audit
Queue a full SEO + AI-readability audit (includes Core Web Vitals from Google PSI and organic traffic if Google is linked). Returns the audit envelope immediately with status: queued and an audit_id. Poll get_audit with the returned audit_id until status becomes complete (typical 10-30s). Counts against the user's plan allowance. Requires an API key as a Bearer token. Quota and per-domain cooldowns mirror the dashboard.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
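The async flow end to end, reusing the `waitForAudit` helper sketched under get_audit above. The envelope field names (`status`, `audit_id`) are taken from the description; the exact JSON shape is an assumption.

```typescript
// Queue an authenticated audit and block until it completes.
// Counts against the plan allowance; quota and cooldowns mirror the dashboard.
async function runAndWait(client: Client, url: string) {
  const queued = await client.callTool({
    name: "run_audit",
    arguments: { url },
  });
  const envelope = JSON.parse((queued.content as any)[0].text); // status: queued
  return waitForAudit(client, envelope.audit_id); // typical 10-30s
}
```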
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully covers behavior: async return with queued status, polling requirement, typical 10-30s time, plan allowance counting, and API key auth. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose and flows logically through the async process. It is efficient, though a couple of sentences could be combined for a tighter read; still well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter, no output schema, and no annotations, the description covers the full async flow, response shape, polling guidance, quota limits, and auth requirements. Fully sufficient for an agent to use correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, and the description adds no specific parameter details beyond the implicit 'url' context. It doesn't clarify validation, format expectations, or error handling for invalid URLs.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool queues an SEO + AI-readability audit, specifying included components (Core Web Vitals, organic traffic). It distinguishes itself from siblings like get_audit (polling) and run_audit_anonymous (the unauthenticated variant).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use this tool (to start an audit) and mentions polling get_audit for results. It misses explicit contraindications or alternatives like run_audit_anonymous, but the context of async usage is clear.
run_audit_anonymous
Run a one-shot SEO + AI-readability audit on any public URL. Returns scores across 11 modules and ~90 checks, plus actionable findings with rule docs. Limited to 1 audit per IP per 24 hours — for higher volume, get an API key at https://app.metricspot.com/settings/api-keys and use run_audit. Synchronous: blocks until the audit completes. Does NOT include Core Web Vitals (use run_audit for full PSI scoring). No auth required.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
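Because this tool needs no API key, a full end-to-end sketch fits here, including connection setup with the MCP TypeScript SDK. The server URL below is a placeholder, since the listing above does not show the endpoint.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint -- substitute the real server URL.
const SERVER_URL = "https://example.com/mcp";

const client = new Client({ name: "seo-audit-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL(SERVER_URL)));

// Synchronous one-shot audit: blocks until complete, 1 per IP per 24h,
// no Core Web Vitals (use run_audit with an API key for full PSI scoring).
const result = await client.callTool({
  name: "run_audit_anonymous",
  arguments: { url: "https://example.com" },
});
console.log((result.content as any)[0].text); // scores for 11 modules plus findings

await client.close();
```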
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses synchronous blocking behavior, rate limit (1 per IP per 24h), absence of Core Web Vitals, and that no auth is required. Full transparency for a simple tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
A concise, front-loaded paragraph with no redundancy. Every sentence adds critical information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Completely describes what the tool does, its limitations, alternatives, and output format (11 modules, ~90 checks, rule docs). Adequate despite missing output schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single `url` parameter has no schema description (0% coverage), but the description clarifies it must be a public URL. Adds value beyond the schema's `format` and `maxLength`.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states it runs a one-shot SEO+AI-readability audit on a public URL, clearly differentiating from the sibling `run_audit` tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit context on when to use this tool vs. alternatives, including rate limits, API key requirements, and the exclusion of Core Web Vitals.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.