seo-web-analysis-mcp-server

Site crawl + tech stack + DNS + SSL + WHOIS — five web-intel layers in one MCP.

Server Details
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 5 of 5 tools scored.
- Each tool targets a distinct aspect of web analysis: DNS, SSL, crawling, tech stack, and WHOIS. There is no functional overlap.
- All tool names follow a consistent verb_noun pattern (check_dns, check_ssl, crawl_website, detect_tech_stack, lookup_whois).
- 5 tools is well-scoped for a web analysis server, covering essential checks without redundancy or unnecessary complexity.
- Covers core web analysis domains well, but lacks checks for robots.txt, sitemaps, or page speed, which are common in SEO audits.
Available Tools
5 tools

check_dns (Read-only)
Perform DNS lookup to retrieve DNS record details for a domain. Returns A records (IP addresses), MX records (mail servers), CNAME records, NS records (nameservers), and TXT records. Use for email setup verification, DNS troubleshooting, or server infrastructure research.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain name to look up (e.g. 'google.com', 'example.org', 'subdomain.example.com') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, so description adds value by detailing exact record types returned. No destructive behavior implied. Consistent with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, each essential. Front-loaded with verb-resource, then records, then use cases. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The single parameter is fully described by the schema, annotations cover safety, and the description lists return types. With no output schema, this provides sufficient information for an agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for the 'domain' parameter. The description does not add extra meaning beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Perform DNS lookup' and lists specific record types (A, MX, CNAME, NS, TXT), distinguishing it from sibling tools like check_ssl and crawl_website.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states use cases: 'email setup verification, DNS troubleshooting, or server infrastructure research.' No mention of when not to use, but context with siblings implies alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_ssl (Read-only)
Inspect SSL/TLS certificate details for a domain. Returns certificate issuer, expiration date, subject alternative names (SANs), key strength, and certificate chain validation status. Use for security audits, certificate renewal tracking, or compliance verification.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain to check SSL certificate (e.g. 'example.com', 'api.example.com') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds behavioral details beyond the annotations (readOnlyHint, openWorldHint) by listing return values such as issuer, expiration, and SANs. Annotations already indicate safety; the description enriches understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: first defines the core action and outputs, second lists use cases. Every word adds value; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple single-parameter tool without output schema, the description thoroughly covers inputs, outputs, and use cases. No gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a well-described domain parameter. The description does not add semantics beyond what the parameter schema provides, so the baseline score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Inspect SSL/TLS certificate details for a domain' and lists specific certificate attributes returned. It clearly distinguishes from sibling tools like check_dns by focusing on SSL/TLS.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage contexts such as 'security audits, certificate renewal tracking, or compliance verification,' guiding the agent on when to use. It does not explicitly state when not to use but the context is clear given siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crawl_website (Read-only)
Crawl a website and extract structured content from all accessible pages. Returns page titles, meta descriptions, headings, body text, internal/external links, and page structure. Use for SEO audits, content inventory, site mapping, or data extraction for analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Website URL to crawl (e.g. 'https://www.example.com', 'example.com') | |
| max_pages | No | Maximum number of pages to crawl (higher for full site scans) | 10 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true and openWorldHint=true, so the description carries a lower burden. It adds return details but does not disclose crawl limitations (e.g., the default of 10 pages, same-domain scope, or robots.txt behavior), which could mislead an agent. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first explains action and output, second lists use cases. Front-loaded with verb and structured content. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose, output, and use cases adequately given the lack of an output schema and the presence of good annotations, but it misses details on output format (e.g., whether each page is returned as an object), limitations, and behavior beyond 'all accessible pages'; the max_pages default goes unmentioned. Could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds no extra parameter-level information beyond what the schema provides, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description starts with explicit verb 'crawl a website and extract structured content', specifies return values (titles, meta, etc.), and lists use cases (SEO audits, etc.). Clearly distinguishes from siblings like check_dns or check_ssl which are specific checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description states 'Use for SEO audits, content inventory, site mapping, or data extraction', giving clear when-to-use context. However, it does not explicitly mention when not to use or compare to siblings, but the distinct purpose implies exclusivity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
detect_tech_stack (Read-only)
Identify the technology stack and services used by a website. Returns framework names, CMS platform, JavaScript libraries, analytics services, CDN provider, hosting provider, and security tools detected. Use for competitive analysis, vendor intelligence, or understanding site architecture.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Website URL to analyze (e.g. 'https://www.example.com') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint and openWorldHint. The description adds detail on return categories but lacks disclosure on latency, accessibility requirements, or caching. Adequate but not enhanced beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise, only three sentences. Each sentence adds value: the first states purpose, the second lists returns, the third suggests usage. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains what the tool returns, which is key given no output schema. Could include behavioral notes like site accessibility requirements, but overall sufficiently complete for a read-only tool with annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description does not add extra meaning to the 'url' parameter beyond the schema's own description. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it identifies technology stacks and services, listing specific categories (frameworks, CMS, libraries, etc.). It also distinguishes from sibling tools (DNS, SSL, crawl, WHOIS) by being uniquely focused on tech detection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly mentions use cases like competitive analysis and vendor intelligence. It does not provide when-not-to-use or alternatives, but siblings are sufficiently different to avoid confusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_whois (Read-only)
Query WHOIS database for domain registration details. Returns registrant name, registrar, registration and expiration dates, registrant contact info, and nameserver list. Use for domain research, owner identification, or tracking registration status.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain name to look up in WHOIS (e.g. 'example.com', 'company.org') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint. The description adds no further behavioral details beyond listing return fields. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states action and result fields, second lists use cases. No unnecessary words, effectively front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description sufficiently covers return values (registrant name, registrar, dates, contact info, nameservers). For a simple one-parameter lookup, this is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the parameter is fully documented in the schema. The description does not add new semantic information about the parameter beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it queries the WHOIS database for domain registration details, listing specific return fields. This clearly distinguishes it from sibling tools like check_dns, check_ssl, crawl_website, and detect_tech_stack.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases: 'domain research, owner identification, or tracking registration status.' While it doesn't contrast with siblings, it gives clear context for when to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.