Glama

Server Details

Site crawl + tech stack + DNS + SSL + WHOIS — five web-intel layers in one MCP.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct aspect of web analysis: DNS, SSL, crawling, tech stack, and WHOIS. There is no functional overlap.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (check_dns, check_ssl, crawl_website, detect_tech_stack, lookup_whois).

Tool Count: 5/5

Five tools is a well-scoped count for a web analysis server, covering the essential checks without redundancy or unnecessary complexity.

Completeness: 4/5

Covers core web analysis domains well, but lacks checks for robots.txt, sitemaps, or page speed, which are common in SEO audits.

Available Tools

5 tools
check_dns: A
Read-only

Perform DNS lookup to retrieve DNS record details for a domain. Returns A records (IP addresses), MX records (mail servers), CNAME records, NS records (nameservers), and TXT records. Use for email setup verification, DNS troubleshooting, or server infrastructure research.

Parameters (JSON Schema):
- domain (required): Domain name to look up (e.g. 'google.com', 'example.org', 'subdomain.example.com')
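
A minimal sketch of how an MCP client could invoke this tool: a `tools/call` request is a JSON-RPC 2.0 message carrying the tool name and its arguments. The helper name and example domain below are illustrative, not part of the server.

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 payload for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Request all DNS records for a domain.
payload = build_tool_call("check_dns", {"domain": "example.com"})
print(payload)
```

The same builder works for every tool on this server; only the `name` and `arguments` fields change.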
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so description adds value by detailing exact record types returned. No destructive behavior implied. Consistent with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, each essential. Front-loaded with verb-resource, then records, then use cases. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The single parameter is fully described by the schema, annotations cover safety, and the description lists return types. With no output schema, this provides sufficient information for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with description for 'domain' parameter. Description does not add extra meaning beyond schema; baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Perform DNS lookup' and lists specific record types (A, MX, CNAME, NS, TXT), distinguishing it from sibling tools like check_ssl and crawl_website.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states use cases: 'email setup verification, DNS troubleshooting, or server infrastructure research.' No mention of when not to use, but context with siblings implies alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_ssl: A
Read-only

Inspect SSL/TLS certificate details for a domain. Returns certificate issuer, expiration date, subject alternative names (SANs), key strength, and certificate chain validation status. Use for security audits, certificate renewal tracking, or compliance verification.

Parameters (JSON Schema):
- domain (required): Domain to check SSL certificate (e.g. 'example.com', 'api.example.com')
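
As a hedged sketch of consuming the expiration date this tool returns for renewal tracking, assuming the OpenSSL-style `notAfter` text format that Python's `ssl` module uses (e.g. 'Jun 15 12:00:00 2026 GMT'); the dates are illustrative:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse a certificate's notAfter field (OpenSSL text format) and
    return the number of whole days until expiry."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - now).days

now = datetime(2026, 2, 1, tzinfo=timezone.utc)
print(days_until_expiry("Jun 15 12:00:00 2026 GMT", now))  # → 134
```

A negative result would indicate an already-expired certificate, the condition a renewal tracker alerts on.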
Behavior: 4/5

Description adds behavioral details beyond annotations (readOnlyHint, openWorldHint) by listing the return values like issuer, expiration, SANs, etc. Annotations already indicate safety; description enriches understanding.

Conciseness: 5/5

Two efficient sentences: first defines the core action and outputs, second lists use cases. Every word adds value; no redundancy.

Completeness: 5/5

Given a simple single-parameter tool without output schema, the description thoroughly covers inputs, outputs, and use cases. No gaps in understanding.

Parameters: 3/5

Schema coverage is 100% with a well-described domain parameter. Description does not add additional semantics beyond what the parameter schema provides, so baseline score is appropriate.

Purpose: 5/5

The description explicitly states 'Inspect SSL/TLS certificate details for a domain' and lists specific certificate attributes returned. It clearly distinguishes from sibling tools like check_dns by focusing on SSL/TLS.

Usage Guidelines: 4/5

The description provides usage contexts such as 'security audits, certificate renewal tracking, or compliance verification,' guiding the agent on when to use. It does not explicitly state when not to use but the context is clear given siblings.

crawl_website: A
Read-only

Crawl a website and extract structured content from all accessible pages. Returns page titles, meta descriptions, headings, body text, internal/external links, and page structure. Use for SEO audits, content inventory, site mapping, or data extraction for analysis.

Parameters (JSON Schema):
- url (required): Website URL to crawl (e.g. 'https://www.example.com', 'example.com')
- max_pages (optional): Maximum number of pages to crawl (default 10, higher for full site scans)
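
The internal/external link split a crawler performs can be sketched with the standard library alone; this outline is illustrative, not the server's actual crawler:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def internal_links(base_url: str, html: str) -> list:
    """Resolve relative hrefs against base_url and keep same-host links only."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    resolved = [urljoin(base_url, link) for link in parser.links]
    return [u for u in resolved if urlparse(u).netloc == host]

page = '<a href="/about">About</a> <a href="https://other.example.net/">Ext</a>'
print(internal_links("https://www.example.com/", page))
# → ['https://www.example.com/about']
```

A breadth-first crawl would enqueue these internal links and stop once `max_pages` pages have been fetched.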
Behavior: 3/5

Annotations already provide readOnlyHint=true and openWorldHint=true, so description carries lower burden. Description adds return details but does not disclose crawl limitations (e.g., default 10 pages, same-domain scope, robots.txt behavior) which could mislead. Adequate but not thorough.

Conciseness: 5/5

Two sentences: first explains action and output, second lists use cases. Front-loaded with verb and structured content. No wasted words.

Completeness: 3/5

Description covers purpose, output, and use cases adequately given no output schema and good annotations, but misses details on output format (e.g., each page as an object), limitations, and behavior beyond 'all accessible pages'; the max_pages default goes unmentioned. Could be more complete.

Parameters: 3/5

Schema coverage is 100% with descriptions for both params. The description adds no extra parameter-level info beyond what the schema provides, so baseline score of 3 is appropriate.

Purpose: 5/5

Description starts with explicit verb 'crawl a website and extract structured content', specifies return values (titles, meta, etc.), and lists use cases (SEO audits, etc.). Clearly distinguishes from siblings like check_dns or check_ssl which are specific checks.

Usage Guidelines: 4/5

Description states 'Use for SEO audits, content inventory, site mapping, or data extraction', giving clear when-to-use context. However, it does not explicitly mention when not to use or compare to siblings, but the distinct purpose implies exclusivity.

detect_tech_stack: A
Read-only

Identify the technology stack and services used by a website. Returns framework names, CMS platform, JavaScript libraries, analytics services, CDN provider, hosting provider, and security tools detected. Use for competitive analysis, vendor intelligence, or understanding site architecture.

Parameters (JSON Schema):
- url (required): Website URL to analyze (e.g. 'https://www.example.com')
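
Detection of this kind typically matches response headers and page markup against a fingerprint table. Below is a minimal sketch with an invented two-header signature table; the real tool's signatures are not documented here:

```python
# Hypothetical signature table: header-value fragments mapped to technologies.
SIGNATURES = {
    "x-powered-by": {"express": "Express", "php": "PHP"},
    "server": {"cloudflare": "Cloudflare", "nginx": "nginx"},
}

def detect_from_headers(headers: dict) -> list:
    """Match lowercased header values against known technology fingerprints."""
    found = []
    for header, patterns in SIGNATURES.items():
        value = headers.get(header, "").lower()
        for needle, tech in patterns.items():
            if needle in value:
                found.append(tech)
    return sorted(found)

headers = {"server": "cloudflare", "x-powered-by": "Express"}
print(detect_from_headers(headers))  # → ['Cloudflare', 'Express']
```

Production detectors add HTML, script-URL, and cookie signatures on top of headers, which is presumably how categories like CMS and analytics providers are identified.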
Behavior: 3/5

Annotations already provide readOnlyHint and openWorldHint. The description adds detail on return categories but lacks disclosure on latency, accessibility requirements, or caching. Adequate but not enhanced beyond annotations.

Conciseness: 5/5

The description is very concise at three sentences, each adding value: the first states the purpose, the second lists returns, the third suggests usage. No wasted words.

Completeness: 4/5

The description explains what the tool returns, which is key given no output schema. Could include behavioral notes like site accessibility requirements, but overall sufficiently complete for a read-only tool with annotations.

Parameters: 3/5

Schema coverage is 100% and the description does not add extra meaning to the 'url' parameter beyond the schema's own description. Baseline score applies.

Purpose: 5/5

The description clearly states it identifies technology stacks and services, listing specific categories (frameworks, CMS, libraries, etc.). It also distinguishes from sibling tools (DNS, SSL, crawl, WHOIS) by being uniquely focused on tech detection.

Usage Guidelines: 4/5

The description explicitly mentions use cases like competitive analysis and vendor intelligence. It does not provide when-not-to-use or alternatives, but siblings are sufficiently different to avoid confusion.

lookup_whois: A
Read-only

Query WHOIS database for domain registration details. Returns registrant name, registrar, registration and expiration dates, registrant contact info, and nameserver list. Use for domain research, owner identification, or tracking registration status.

Parameters (JSON Schema):
- domain (required): Domain name to look up in WHOIS (e.g. 'example.com', 'company.org')
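
WHOIS responses are loosely standardized 'Key: Value' text with registrar-specific comment lines, which is what produces the structured fields listed above. A minimal parser sketch (illustrative, not the server's implementation):

```python
def parse_whois(raw: str) -> dict:
    """Parse 'Key: Value' lines from a raw WHOIS response, skipping
    comment lines and keeping the first value seen for each key."""
    record = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith(("%", "#", ">>>")):
            continue
        if ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:
                record.setdefault(key, value)
    return record

sample = """% Terms of use
Domain Name: EXAMPLE.COM
Registrar: Example Registrar, LLC
Expiry Date: 2026-08-13T04:00:00Z
"""
print(parse_whois(sample))
```

Real-world output varies by registry, so robust consumers normalize key names (e.g. 'Expiry Date' vs 'Registry Expiry Date') before use.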
Behavior: 3/5

Annotations already declare readOnlyHint and openWorldHint. The description adds no further behavioral details beyond listing return fields. It does not contradict annotations.

Conciseness: 5/5

Two concise sentences: first states action and result fields, second lists use cases. No unnecessary words, effectively front-loaded.

Completeness: 5/5

No output schema exists, but the description sufficiently covers return values (registrant name, registrar, dates, contact info, nameservers). For a simple one-parameter lookup, this is complete.

Parameters: 3/5

Schema description coverage is 100%, so the parameter is fully documented in the schema. The description does not add new semantic information about the parameter beyond what the schema provides.

Purpose: 5/5

The description explicitly states it queries the WHOIS database for domain registration details, listing specific return fields. This clearly distinguishes it from sibling tools like check_dns, check_ssl, crawl_website, and detect_tech_stack.

Usage Guidelines: 4/5

The description provides explicit use cases: 'domain research, owner identification, or tracking registration status.' While it doesn't contrast with siblings, it gives clear context for when to use the tool.
