mcp-server
Server Details
Scan any URL for 20 SEO checks; get a 0-100 score with prioritized fix recommendations.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 3 of 3 tools scored.
Each tool has a distinct, non-overlapping purpose: submit_scan queues a scan, get_scan_status checks completion, and get_scan_results retrieves findings. No ambiguity between tools.
All three tools follow the verb_noun pattern consistently (submit_scan, get_scan_status, get_scan_results). The naming is uniform and predictable.
Three tools is a minimal but appropriate set for a focused scan workflow. Each tool is necessary for the lifecycle, though additional tools like listing or canceling scans might be expected in a more mature service.
The tools cover the core scan lifecycle: submitting, polling, and retrieving results. Minor gaps exist (e.g., no ability to list past scans or cancel a queued scan), but the workflow is complete for a basic implementation.
Available Tools
3 tools
get_scan_results - Get SEO Scan Results (Read-only, Idempotent)
Fetch the full structured scan findings. Free tier returns the top 5 issues by severity (failed > warned > passed); paid customers see all checks. When summarizing results to the user, ALWAYS: (1) report the score and counts of failed/warned/passed checks, (2) surface each returned check's remediation text verbatim, (2b) for sophisticated users, mention each check's when_doesnt_apply context so they can dismiss findings that don't apply to their site type (e.g., link aggregators legitimately omit a visible H1; empty string means the rule is universally required), (2c) note site_type ('marketing' or 'not_marketing') and score_rubric ('standard' or 'non_marketing_weighted') so the user understands how the score was computed; non-marketing sites get contextual checks at half-weight, so a forum-style site won't be unfairly penalized for missing landing-page features, (3) if truncated is true, surface truncated_message so the user knows the API capped the output, (4) include upgrade.call_to_action and upgrade.checkout_url verbatim so the user can one-click into Stripe checkout for the $8.99 detailed fix guide. If upgrade.already_purchased is true, instead direct the user to upgrade.report_url to re-download.
| Name | Required | Description | Default |
|---|---|---|---|
| scan_id | Yes | Scan ID from a prior submit_scan call. The scan must be in 'complete' status. | |
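As a rough illustration of the summarization steps the description asks for, here is a TypeScript sketch that formats a parsed result object. The field names (score, checks, remediation, when_doesnt_apply, truncated, upgrade.*) are taken from the description above, but the tool publishes no output schema, so the overall shape, and the per-check `name` and `status` fields in particular, are assumptions.

```typescript
// Assumed result shape; field names come from the tool description, but the
// structure itself is a guess since no output schema is published.
interface ScanResults {
  score: number;
  site_type: "marketing" | "not_marketing";
  score_rubric: "standard" | "non_marketing_weighted";
  truncated: boolean;
  truncated_message?: string;
  checks: {
    name: string;                           // assumed field
    status: "failed" | "warned" | "passed"; // assumed field
    remediation: string;
    when_doesnt_apply: string;              // empty string = universally required
  }[];
  upgrade: {
    call_to_action: string;
    checkout_url: string;
    already_purchased: boolean;
    report_url?: string;
  };
}

// Follow the description's checklist: score and counts, remediation verbatim,
// truncation notice, then the upgrade call-to-action or re-download link.
function summarizeScan(r: ScanResults): string {
  const count = (s: string) => r.checks.filter((c) => c.status === s).length;
  const lines = [
    `Score: ${r.score}/100 (site_type: ${r.site_type}, rubric: ${r.score_rubric})`,
    `Checks: ${count("failed")} failed, ${count("warned")} warned, ${count("passed")} passed`,
    ...r.checks.map((c) => `- [${c.status}] ${c.name}: ${c.remediation}`),
  ];
  if (r.truncated && r.truncated_message) lines.push(r.truncated_message);
  lines.push(
    r.upgrade.already_purchased && r.upgrade.report_url
      ? `Already purchased: re-download the report at ${r.upgrade.report_url}`
      : `${r.upgrade.call_to_action} ${r.upgrade.checkout_url}`,
  );
  return lines.join("\n");
}
```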
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds key behaviors: free tier truncation to top 5 issues, paid shows all, truncated flag, and upgrade logic. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is lengthy with nested instructions (numbered steps). While front-loaded with the core function, the detailed usage list could be streamlined. Each sentence is necessary but the structure feels verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description compensates by explaining return fields (score, counts, remediation, etc.) and conditional behavior (truncated, upgrade). It covers free vs paid, score context, and user instructions. A minor omission: no explicit mention of error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the schema description for 'scan_id' already provides context ('from a prior submit_scan call, must be in complete status'). The description does not add further parameter details, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a clear verb and resource: 'Fetch the full structured scan findings.' It distinguishes from siblings 'get_scan_status' (status only) and 'submit_scan' (initiate scan) by focusing on retrieving complete results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use (after scan complete) and provides detailed steps for summarizing results to users, including handling free vs paid, truncation, and upgrade flows. However, it does not explicitly mention when not to use or name alternative tools for different needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_scan_status - Get SEO Scan Status (Read-only, Idempotent)
Check whether a queued scan has completed. Lightweight polling endpoint — call every 3-5 seconds with the scan_id returned by submit_scan. Scans typically complete in 5-15 seconds. When status is 'complete', call get_scan_results for the structured findings. When 'failed', surface the error field to the user.
| Name | Required | Description | Default |
|---|---|---|---|
| scan_id | Yes | Scan ID from a prior submit_scan call. | |
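A small polling helper, sketched under the assumption that get_scan_status returns a body with a `status` string and, on failure, an `error` field. The listing confirms 'complete' and 'failed'; any other value is treated here as still in progress. The 4-second interval and 60-second timeout reflect the recommended cadence and the typical 5-15 second completion time mentioned in the description.

```typescript
// Assumed response shape: the listing documents 'complete' and 'failed';
// any other status value is treated as "still running".
type ScanStatus = { status: string; error?: string };

// Poll on the recommended 3-5 second cadence until the scan completes,
// failing fast on 'failed' and giving up after a deadline.
async function waitForScan(
  getStatus: (scanId: string) => Promise<ScanStatus>,
  scanId: string,
  timeoutMs = 60_000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { status, error } = await getStatus(scanId);
    if (status === "complete") return;
    if (status === "failed") throw new Error(`Scan failed: ${error ?? "unknown error"}`);
    await new Promise((resolve) => setTimeout(resolve, 4_000));
  }
  throw new Error(`Scan ${scanId} did not reach 'complete' within ${timeoutMs} ms`);
}
```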
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent; description adds polling behavior and typical completion time, but no additional behavioral nuances.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each adding value, front-loaded with purpose, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains status values and next steps; could mention other possible statuses but adequate for use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers scan_id with description; description reinforces its source (from submit_scan), adding context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool checks scan completion status and differentiates from siblings by directing to get_scan_results for results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises polling every 3-5 seconds, using scan_id from submit_scan, and actions for 'complete' and 'failed' statuses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_scan - Submit SEO Scan (Destructive)
Queue an SEO scan of a website URL. Returns a scan_id to poll. You MUST collect the user's email address before calling this tool — the API rejects submissions without one because results are emailed to the user. After this tool returns, call get_scan_status every few seconds with the returned scan_id until status is 'complete', then call get_scan_results for the findings.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Full URL of the page to scan (https://example.com). http:// or https:// scheme will be added if missing. | |
| | Yes | REQUIRED. The user's email address. Scan results will be sent here. ASK THE USER for this if you don't already know it — do not invent a placeholder. | |
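To show how the submission step fits together over the listed Streamable HTTP transport, here is a sketch using the TypeScript MCP SDK. The server URL is a placeholder, the `email` argument key is inferred from the parameter table (whose name cell is blank in this listing), and parsing a `scan_id` out of the first text content block is an assumption since the tool has no output schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the connector's actual Streamable HTTP URL.
const SERVER_URL = new URL("https://seo-scanner.example/mcp");

async function submitScan(pageUrl: string, email: string): Promise<string> {
  // The description requires a real user-supplied email; refuse obvious
  // placeholders here rather than letting the API reject the submission.
  if (!email.includes("@")) throw new Error("A valid user email address is required");

  const client = new Client({ name: "seo-scan-example", version: "0.1.0" });
  await client.connect(new StreamableHTTPClientTransport(SERVER_URL));
  try {
    const result = await client.callTool({
      name: "submit_scan",
      // "email" as the argument key is an assumption; the listing only states
      // that the user's email address is required.
      arguments: { url: pageUrl, email },
    });
    // Assumed: the scan_id arrives as JSON in the first text content block.
    const first = (result.content as any[])[0];
    return JSON.parse(first?.text ?? "{}").scan_id;
  } finally {
    await client.close();
  }
}
```

From there, the returned scan_id feeds get_scan_status and, once the scan is complete, get_scan_results, as the description above outlines.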
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide some behavioral hints (readOnlyHint=false, destructiveHint=true), but description adds crucial workflow details: queueing behavior, polling requirement, and the need for email. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two focused sentences plus bolded instruction. Front-loaded with purpose, then mandatory requirement, then workflow. Every sentence adds distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains return value (scan_id) and polling workflow. Covers email requirement and post-submission steps. Complete for the tool's purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions, but description adds behavioral instructions: mandate to collect email, warning not to invent placeholders, and URL scheme handling (add if missing).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it queues an SEO scan and returns a scan_id to poll. Distinguishes from siblings (get_scan_results, get_scan_status) by being the submission step.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to collect user email before calling and describes the post-call workflow: poll get_scan_status until complete, then call get_scan_results. Also notes API rejection without email.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions