nslookup

Server Details

DNS lookups, health reports, SSL certs, security scans, GEO scoring, uptime checks

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: NsLookup-io/nslookup-mcp
GitHub Stars: 19
Server Listing
nslookup.io MCP Server

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.7/5 across 11 of 11 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, but dns_lookup and dns_record overlap somewhat, since both handle DNS record queries and could be mistaken for one another. The descriptions mitigate this: dns_lookup retrieves the common record types broadly, while dns_record targets a single specified type.

Naming Consistency: 4/5

Tool names generally follow a consistent snake_case pattern with descriptive verb_noun combinations, such as dns_lookup and ssl_certificate. Minor deviations include geo_checker (which uses 'checker' instead of a verb) and webservers (plural noun without a verb), but overall the naming is predictable and readable.

Tool Count: 5/5

With 11 tools, the count is well-scoped for a DNS and domain analysis server, covering a comprehensive range of functions from basic lookups to advanced audits and security scans. Each tool serves a specific purpose, and there is no bloat or missing coverage for the domain's scope.

Completeness: 5/5

The tool set provides complete coverage for DNS and domain-related operations, including lookups, propagation checks, health audits, security scans, SSL certificates, uptime monitoring, and specialized checks like BIMI and GEO optimization. No obvious gaps exist, enabling agents to handle full workflows without dead ends.

Available Tools (11 tools)
bimi_vmc (Grade: A)

Check BIMI (Brand Indicators for Message Identification) and VMC (Verified Mark Certificate) for a domain. Returns BIMI DNS record status, VMC certificate details, logo URL, trademark info, and expiry.

Parameters (JSON Schema):
- domain (required): Domain name to check BIMI/VMC for (e.g. google.com)
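The BIMI status this tool reports comes from a TXT record published at default._bimi.<domain>, whose tags point at the logo and the VMC. As a sketch of that record format (the server's own parsing is not documented), a minimal tag parser:

```python
def parse_bimi_record(txt: str) -> dict:
    """Split a BIMI TXT record (published at default._bimi.<domain>)
    into its tag=value pairs: v (version), l (logo URL), a (VMC URL)."""
    tags = {}
    for part in txt.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:  # keep only well-formed tag=value parts
            tags[key.strip()] = value.strip()
    return tags

record = "v=BIMI1; l=https://example.com/logo.svg; a=https://example.com/vmc.pem"
print(parse_bimi_record(record))
```

The example record values are illustrative, not taken from a real domain.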
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return data (BIMI DNS record status, VMC certificate details, etc.) but does not mention potential errors, rate limits, authentication needs, or whether the operation is read-only (though implied by 'check').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and efficiently lists return values in the second. Every sentence adds value with zero waste, making it appropriately sized and structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description partially compensates by detailing return values. However, for a tool with behavioral complexity (e.g., external checks, potential failures), it lacks information on error handling or operational constraints, leaving gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the domain parameter well-documented in the schema. The description adds no additional parameter details beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check BIMI and VMC for a domain') and resource ('domain'), distinguishing it from sibling tools like dns_lookup or ssl_certificate by focusing on brand verification metrics rather than general DNS or SSL checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for domain verification scenarios but does not explicitly state when to use this tool versus alternatives like dns_record or security_scan. It provides context but lacks explicit guidance on exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dns_health (Grade: A)

Run a comprehensive DNS health audit on a domain — 39 checks across 7 categories: DNSSEC (chain of trust, algorithms, validation), MX & email (PTR, MTA-STS, redundancy), DNS hygiene (SPF conflicts, wildcards, apex CNAME), TTL & SOA configuration, nameserver setup (diversity, lame delegation, EDNS0), CAA certificates, and operational maturity (security.txt, abuse mailbox). Returns an overall severity-weighted score (0–100) plus per-category scores.

Parameters (JSON Schema):
- domain (required): Domain name to check DNS health for (e.g. example.com)
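The overall 0–100 number is described as severity-weighted, but the exact weighting is not published. One plausible scheme, purely as a sketch with made-up penalty values, subtracts a per-severity penalty for each failed check:

```python
# Hypothetical per-severity penalties; the server's real weights are not published.
PENALTY = {"critical": 15, "high": 8, "medium": 3, "low": 1}

def health_score(failed_checks: list) -> int:
    """Severity-weighted 0-100 score: start from 100 and subtract a
    penalty for each failed check's severity, clamping at 0."""
    return max(0, 100 - sum(PENALTY.get(sev, 0) for sev in failed_checks))

print(health_score(["high", "medium", "low"]))  # 88
print(health_score(["critical"] * 10))          # 0
```

Per-category scores would follow the same arithmetic restricted to one category's checks.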
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (comprehensive audit with 39 checks across 7 categories) and what it returns (overall severity-weighted score plus per-category scores). It doesn't mention rate limits, authentication needs, or potential side effects, but provides substantial operational context for a read-only analysis tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in a single sentence that front-loads the core purpose and then provides specific details about check categories and return values. Every element serves a purpose: the comprehensive nature, specific check categories, and output format are all essential information with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description provides excellent context about what the audit entails and what it returns. It covers the scope (39 checks across 7 categories), specific check types, and output format (severity-weighted scores). The main gap is lack of explicit behavioral constraints like rate limits or error conditions, but overall it's quite complete for this tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with a clear parameter description for 'domain'. The tool description doesn't add any additional parameter information beyond what's in the schema, but since schema coverage is complete, the baseline score of 3 is appropriate. The description focuses on the audit scope rather than parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Run a comprehensive DNS health audit') and resource ('on a domain'), distinguishing it from sibling tools like dns_lookup or dns_record by emphasizing the comprehensive audit nature with 39 checks across 7 categories. It provides concrete details about what the audit entails rather than just restating the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it's for DNS health auditing on domains, which helps differentiate from tools like ssl_certificate or security_scan. However, it doesn't explicitly state when NOT to use this tool or name specific alternatives among siblings, leaving some ambiguity about tool selection scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dns_lookup (Grade: A)

Look up all common DNS records (A, AAAA, NS, MX, TXT, CNAME, SOA) for a domain. Returns results from a specified DNS server.

Parameters (JSON Schema):
- domain (required): Domain name to look up (e.g. example.com)
- server (optional): DNS server to query. Default: cloudflare. Use 'authoritative' for the domain's own nameservers.
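Since this server is reached over Streamable HTTP, a client invokes the tool with a JSON-RPC tools/call request, the standard MCP method for tool invocation. A sketch of such a request body (the id is arbitrary and the argument values are examples matching the schema above):

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,  # arbitrary request id chosen by the client
    "method": "tools/call",
    "params": {
        "name": "dns_lookup",
        "arguments": {
            "domain": "example.com",
            "server": "authoritative",  # optional; defaults to cloudflare
        },
    },
}
print(json.dumps(request))
```

A gateway like Glama would log this full payload and the tool's response for auditing.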
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: returns results from a specified DNS server (including default and authoritative option), and lists the exact record types returned. However, it doesn't mention error handling, rate limits, authentication needs, or response format details that would be helpful for a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence states purpose and scope, second sentence adds crucial behavioral detail about DNS server specification. Every word earns its place, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only DNS lookup tool with 2 parameters (100% schema coverage) but no output schema, the description is adequate but has gaps. It explains what the tool does and server options well, but doesn't describe the return format, error conditions, or limitations. Given the lack of annotations and output schema, more completeness would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the server parameter's purpose ('DNS server to query') and providing context about the 'authoritative' option, which goes beyond the enum list in the schema. However, it doesn't elaborate on domain parameter semantics beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('look up'), resource ('DNS records'), and scope ('all common DNS records' with explicit list: A, AAAA, NS, MX, TXT, CNAME, SOA). It distinguishes from siblings like dns_record (likely single record type) and dns_health/dns_propagation (different DNS functions).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for comprehensive DNS record lookup. It doesn't explicitly state when NOT to use it or name alternatives, but the specificity implies it's for bulk DNS queries rather than single-record checks (dns_record) or health/propagation monitoring (dns_health, dns_propagation).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dns_propagation (Grade: A)

Check DNS propagation for a domain across 18+ global DNS servers (Cloudflare, Google, Quad9, OpenDNS, regional servers, and authoritative nameservers). Shows if DNS changes have propagated worldwide.

Parameters (JSON Schema):
- domain (required): Domain name to check propagation for (e.g. example.com)
- recordType (required): DNS record type to check (e.g. A, AAAA, MX, NS, TXT, CNAME)
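The core of a propagation check is comparing each public resolver's answer set against the authoritative one. A sketch of that comparison, with the resolver names and addresses purely illustrative (the server's actual output format is not documented):

```python
def propagation_report(answers: dict) -> dict:
    """Compare each resolver's answer set against the authoritative one.
    'answers' maps resolver name -> set of record values it returned."""
    expected = answers["authoritative"]
    lagging = sorted(
        name for name, got in answers.items()
        if name != "authoritative" and got != expected
    )
    return {"propagated": not lagging, "lagging": lagging}

report = propagation_report({
    "authoritative": {"203.0.113.10"},
    "cloudflare": {"203.0.113.10"},
    "google": {"198.51.100.7"},  # still serving the old A record
})
print(report)  # {'propagated': False, 'lagging': ['google']}
```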
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the tool's scope (18+ global servers) and purpose (propagation checking), but doesn't mention behavioral aspects like rate limits, authentication requirements, timeout behavior, or what specific output format to expect. The description is accurate but lacks operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence establishes purpose and scope efficiently. The second sentence adds crucial context about the tool's use case. Every word earns its place, and the most important information (what the tool does) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with no annotations and no output schema, the description provides adequate basic context about what the tool does and when to use it. However, it lacks details about the return format, error conditions, or operational constraints that would be helpful for an AI agent to use this tool effectively. The description is complete enough for basic understanding but could be more comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete parameter documentation. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'domain' and 'DNS record type' generally but provides no additional syntax, format, or usage guidance for parameters. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check DNS propagation'), target resource ('for a domain'), and scope ('across 18+ global DNS servers'). It distinguishes from siblings like dns_lookup or dns_record by focusing specifically on propagation status rather than general DNS queries or record management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('Shows if DNS changes have propagated worldwide'), which implicitly suggests it's for post-change verification. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools for different DNS-related tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dns_record (Grade: C)

Look up a specific DNS record type for a domain. Supports 53 record types including A, AAAA, MX, TXT, CNAME, SOA, PTR, CAA, SRV, DNSKEY, DS, TLSA, HTTPS, SPF, and more.

Parameters (JSON Schema):
- type (required): DNS record type (e.g. A, MX, TXT, CNAME, SPF, HTTPS, DNSKEY)
- domain (required): Domain name (or IP address for PTR lookups) to query (e.g. example.com)
- server (optional): DNS server to query. Default: cloudflare. Use 'authoritative' for the domain's own nameservers.
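One schema detail worth noting: for PTR lookups the domain argument is an IP address. Under the hood a PTR query resolves the address's reverse-pointer name, which Python's stdlib can derive directly:

```python
import ipaddress

def ptr_query_name(ip: str) -> str:
    """The name a PTR lookup actually queries: octets reversed under
    in-addr.arpa for IPv4 (nibbles under ip6.arpa for IPv6)."""
    return ipaddress.ip_address(ip).reverse_pointer

print(ptr_query_name("192.0.2.1"))  # 1.2.0.192.in-addr.arpa
```

Whether the server expects the raw IP or the reverse-pointer name is not stated; the raw IP, as the schema's wording suggests, is assumed here.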
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool performs a lookup operation, implying it's read-only and non-destructive, but doesn't explicitly confirm this or detail other behaviors like rate limits, authentication needs, error handling, or what the output looks like (e.g., raw DNS data). For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first in a single sentence. The second sentence adds useful context about supported record types without being verbose. Separating the purpose from the feature list more explicitly would help slightly, but the description remains efficient with little waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is incomplete. It lacks details on behavioral traits (e.g., read-only nature, potential errors), output format, and usage guidelines relative to siblings. While the purpose is clear, the description doesn't compensate for the absence of annotations and output schema, making it inadequate for full agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it lists example record types (e.g., A, AAAA, MX) and implies support for many more, but doesn't provide additional syntax, format details, or usage context for the parameters. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Look up a specific DNS record type for a domain.' It specifies the verb ('look up') and resource ('DNS record type for a domain'), and distinguishes it from siblings like dns_lookup or dns_propagation by focusing on specific record types. However, it doesn't explicitly differentiate from dns_lookup (which might be more general), keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions supporting 53 record types but doesn't explain when to choose this over siblings like dns_lookup (which might handle multiple records) or dns_health. There's no mention of prerequisites, exclusions, or specific use cases, leaving the agent to infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geo_checker (Grade: A)

Check a domain's GEO (Generative Engine Optimization) score — how well the site is optimized for AI search engines like ChatGPT, Gemini, Claude, and Perplexity. Returns three scores (Technical Readiness, Entity Readiness, Answer Readiness), AI crawler access status, structured data analysis, and prioritized recommendations.

Parameters (JSON Schema):
- domain (required): Domain name to check GEO score for (e.g. github.com)
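The "AI crawler access status" part of the report presumably hinges on robots.txt rules for AI crawler user agents. A rough sketch of such a check, using a few well-known crawler names; real robots.txt parsing handles grouped User-agent lines and path precedence, whereas this only catches a blanket Disallow: /:

```python
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def crawler_access(robots_txt: str) -> dict:
    """Return bot -> True unless the bot is blanket-disallowed ('Disallow: /')."""
    blocked, current = set(), None
    for raw in robots_txt.splitlines():
        line = raw.split("#")[0].strip()  # drop comments
        lower = line.lower()
        if lower.startswith("user-agent:"):
            current = line.split(":", 1)[1].strip()
        elif lower.startswith("disallow:"):
            if line.split(":", 1)[1].strip() == "/" and current in AI_CRAWLERS:
                blocked.add(current)
    return {bot: bot not in blocked for bot in AI_CRAWLERS}

robots = "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nDisallow:"
print(crawler_access(robots))
# {'GPTBot': False, 'ClaudeBot': True, 'PerplexityBot': True}
```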
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's function and outputs but lacks details on behavioral traits such as rate limits, authentication needs, or potential side effects (e.g., whether it performs active scanning or uses cached data). The description does not contradict annotations (none exist), but it provides only basic operational context without deeper behavioral insights.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficiently structured in a single sentence that covers the tool's purpose, outputs, and key details without unnecessary words. Every element (e.g., the three scores, AI crawler status, recommendations) serves to clarify functionality, making it concise and well-organized for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (assessing multiple readiness scores and analyses) and the absence of both annotations and an output schema, the description provides a good overview but lacks completeness. It mentions return values but does not detail their structure or format, and it omits behavioral aspects like performance or limitations. For a tool with no structured output documentation, more context on results would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'domain' clearly documented in the schema. The description adds value by specifying the type of domain analysis (GEO score for AI search engines) and providing an example ('github.com'), which enhances understanding beyond the schema's basic definition. Since there is only one parameter, the baseline is high, and the description effectively complements it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check', 'returns') and resources ('domain's GEO score'), explicitly listing what it assesses (Technical Readiness, Entity Readiness, Answer Readiness) and the outputs (scores, AI crawler access status, structured data analysis, recommendations). It distinctly differentiates from sibling tools like DNS or security tools by focusing on AI search engine optimization rather than network or security metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for evaluating AI search engine optimization, but it does not explicitly state when to use this tool versus alternatives (e.g., when to choose geo_checker over dns_lookup or security_scan). No exclusions or prerequisites are mentioned, leaving the context somewhat vague beyond the implied domain analysis scenario.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

security_scan (Grade: B)

Run a security scan on a domain to detect DNS misconfigurations, missing SPF/DKIM/DMARC records, cookie security issues, and other web security vulnerabilities. Returns findings with severity levels (critical, high, medium, low, info).

Parameters (JSON Schema):
- domain (required): Domain name to security scan (e.g. example.com)
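Since findings come back tagged with one of five severity levels, a consuming agent typically wants them most-severe-first. A small sketch of that ordering; the finding dicts and check names are illustrative, not the server's actual output shape:

```python
SEVERITIES = ["critical", "high", "medium", "low", "info"]
RANK = {s: i for i, s in enumerate(SEVERITIES)}

def sort_findings(findings: list) -> list:
    """Order findings most-severe-first; sorted() is stable, so ties
    keep their original order."""
    return sorted(findings, key=lambda f: RANK[f["severity"]])

findings = [
    {"severity": "low", "check": "cookie missing SameSite"},
    {"severity": "critical", "check": "DMARC record absent"},
    {"severity": "info", "check": "security.txt present"},
]
print([f["severity"] for f in sort_findings(findings)])
# ['critical', 'low', 'info']
```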
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions the tool 'returns findings with severity levels', it doesn't describe important behavioral aspects like whether this is a read-only operation, whether it makes external network calls, potential rate limits, authentication requirements, or what happens if the domain is invalid. The description provides basic output format but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the action and scope, the second describes the return format. It's appropriately sized for a single-parameter tool, though it could be slightly more front-loaded by mentioning the severity levels earlier. There's minimal wasted language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides adequate basic information about what the tool does and what it returns. However, it lacks important context about operational behavior, error conditions, and how the severity levels should be interpreted. The description is complete enough to understand the tool's purpose but insufficient for fully informed usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description doesn't add any parameter-specific information beyond what's already in the schema, which has 100% coverage with a well-documented 'domain' parameter. The baseline score of 3 reflects that the schema adequately documents the single parameter, so the description doesn't need to compensate but also doesn't add value regarding parameter usage or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Run a security scan') on a specific resource ('domain') and lists the types of vulnerabilities detected (DNS misconfigurations, SPF/DKIM/DMARC records, cookie security, web security vulnerabilities). It distinguishes itself from sibling tools like dns_lookup or ssl_certificate by focusing on comprehensive security vulnerability assessment rather than specific DNS or SSL checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this comprehensive security scan is preferred over more targeted sibling tools like dns_health or ssl_certificate, nor does it specify prerequisites, exclusions, or appropriate contexts for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ssl_certificate (Grade: A)

Check the SSL/TLS certificate for a domain. Returns issuer, expiry date, days until expiry, certificate chain validity, cipher strength, SAN domains, fingerprint, and TLS protocol version.

Parameters (JSON Schema):
- domain (required): Domain name to check SSL certificate for (e.g. github.com)
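The "days until expiry" figure is simple arithmetic on the certificate's notAfter field. A sketch of that calculation, assuming the timestamp format Python's ssl.getpeercert() returns (how this server actually obtains the certificate is not documented):

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Whole days between 'now' and a notAfter string such as
    'Jun  1 12:00:00 2030 GMT' (the format ssl.getpeercert() returns)."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expiry.replace(tzinfo=timezone.utc) - now).days

now = datetime(2029, 12, 2, tzinfo=timezone.utc)
print(days_until_expiry("Jan  1 00:00:00 2030 GMT", now))  # 30
```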
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the return data but does not mention operational aspects such as rate limits, network dependencies, error handling, or whether this is a read-only operation. For a tool with no annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a concise list of returned data. Every sentence adds value without redundancy, making it efficiently structured and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no output schema, no annotations), the description adequately covers the purpose and return values. However, it lacks details on behavioral traits like error conditions or performance, which would enhance completeness for a network-dependent tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'domain' well-documented in the schema. The description adds no parameter-specific details beyond what the schema provides, such as format examples or constraints, so it meets the baseline expectation for high schema coverage but offers no compensating value on top of it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check the SSL/TLS certificate for a domain') and resource ('domain'), distinguishing it from sibling tools like dns_lookup or security_scan by focusing exclusively on SSL/TLS certificate inspection. It provides a comprehensive list of returned information, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (when you need SSL/TLS certificate details for a domain) but does not explicitly state when to use this tool versus alternatives like security_scan or dns_health. It lacks guidance on prerequisites, exclusions, or specific scenarios where this tool is preferred over others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uptime_check (A)

Perform a one-time HTTP uptime check on a URL from a single location. Returns whether the site is up or down, HTTP status code, and response time in milliseconds. For multi-location checks, use uptime_check_multi instead.

Parameters (JSON Schema)

| Name    | Required | Description                                 | Default |
|---------|----------|---------------------------------------------|---------|
| url     | Yes      | Full URL to check (e.g. https://github.com) |         |
| timeout | No       | Timeout in milliseconds                     | 10000   |
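The described behavior maps closely onto a single timed HTTP request. A rough local approximation in Python (assumptions: a non-5xx response counts as "up", since the service's actual up/down criterion is not documented, and the function name mirrors the tool only for illustration):

```python
import time
import urllib.error
import urllib.request

def uptime_check(url: str, timeout_ms: int = 10000) -> dict:
    """One-shot HTTP check, loosely mirroring the tool's described output."""
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout_ms / 1000) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code          # server answered, just with an error code
    except (urllib.error.URLError, OSError):
        pass                       # DNS failure, refused connection, timeout
    elapsed_ms = int((time.monotonic() - start) * 1000)
    return {
        "up": status is not None and status < 500,  # assumed definition of "up"
        "status_code": status,
        "response_time_ms": elapsed_ms,
    }
```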
Behavior — 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the core behavior (one-time HTTP check, returns status, code, response time) and mentions the sibling alternative, but lacks details on error handling, authentication needs, rate limits, or what constitutes 'up' vs. 'down'. While adequate for basic understanding, it misses advanced operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first sentence states the purpose and output, and the second provides crucial sibling differentiation. Every word earns its place, and the most important information (what it does) is front-loaded, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, 100% schema coverage, and no output schema, the description is mostly complete. It covers purpose, output format, and sibling differentiation. However, without annotations or output schema, it could better explain behavioral nuances like error cases or response structure. Given the simplicity, it's largely adequate but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description does not add any parameter-specific information beyond what's in the schema (e.g., it doesn't clarify URL format constraints or timeout implications). This meets the baseline expectation when the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Perform a one-time HTTP uptime check'), resource ('on a URL'), and scope ('from a single location'). It explicitly distinguishes this tool from its sibling 'uptime_check_multi' by contrasting single-location vs. multi-location checks, making the purpose unambiguous and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Perform a one-time HTTP uptime check on a URL from a single location') and when to use an alternative ('For multi-location checks, use uptime_check_multi instead'). This directly addresses the key decision point between this tool and its sibling, offering clear context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uptime_check_multi (A)

Check if a website is up or down from 7 global locations simultaneously: Amsterdam, Sydney, London, Frankfurt, Delhi, Warsaw, and South Carolina. Returns status, response time, and HTTP status code for each location.

Parameters (JSON Schema)

| Name    | Required | Description                                 | Default |
|---------|----------|---------------------------------------------|---------|
| url     | Yes      | Full URL to check (e.g. https://github.com) |         |
| timeout | No       | Timeout in milliseconds                     | 30000   |
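The fan-out-and-aggregate shape of a multi-location check can be sketched without the remote probes themselves. A hypothetical outline where `probe` stands in for whatever each regional agent actually runs (the real service's internals are not documented):

```python
from concurrent.futures import ThreadPoolExecutor

LOCATIONS = ["Amsterdam", "Sydney", "London", "Frankfurt",
             "Delhi", "Warsaw", "South Carolina"]

def uptime_check_multi(probe):
    """Run probe(location) for every location concurrently and collect
    one result dict per location, mirroring the tool's described output.
    probe must return an (up, status_code, response_time_ms) tuple."""
    def check(location):
        up, code, ms = probe(location)
        return {"location": location, "up": up,
                "status_code": code, "response_time_ms": ms}

    with ThreadPoolExecutor(max_workers=len(LOCATIONS)) as pool:
        return list(pool.map(check, LOCATIONS))
```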
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it performs simultaneous checks from 7 specific locations, returns status/response time/HTTP code per location, and implies network operations. However, it doesn't mention rate limits, authentication needs, or error handling for invalid URLs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: first states the action and scope, second specifies the return values. Every word earns its place with zero redundant information, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides good context about what the tool does and returns. It covers the multi-location checking behavior and output format adequately, though it could benefit from mentioning error cases or response structure details given the absence of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (url and timeout). The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for adequate coverage through structured data alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('check if a website is up or down') and resources ('from 7 global locations'), and distinguishes it from the sibling 'uptime_check' by specifying multi-location simultaneous checking. It explicitly mentions what it returns (status, response time, HTTP status code for each location).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for website uptime monitoring from multiple locations, but doesn't explicitly state when to use this tool versus alternatives like 'uptime_check' (presumably single-location) or other siblings. It provides context (global locations) but lacks explicit guidance on when-not-to-use or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

webservers (B)

Get the IP addresses (both IPv4 and IPv6) for a domain by looking up A and AAAA records. Also returns the punycode and unicode domain representations.

Parameters (JSON Schema)

| Name   | Required | Description                                               | Default |
|--------|----------|-----------------------------------------------------------|---------|
| domain | Yes      | Domain name to look up IP addresses for (e.g. example.com) |        |
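The described lookup corresponds to A/AAAA resolution plus an IDNA conversion. A sketch using only the Python standard library (assumption: the real service queries DNS resolvers directly rather than the local system resolver used here, and Python's built-in `idna` codec implements IDNA 2003, which may differ from the service's punycode handling):

```python
import socket

def webservers(domain: str) -> dict:
    """Resolve a domain's IPv4 (A) and IPv6 (AAAA) addresses via the
    system resolver, plus its punycode and unicode representations."""
    punycode = domain.encode("idna").decode("ascii")
    infos = socket.getaddrinfo(domain, None)
    ipv4 = sorted({info[4][0] for info in infos if info[0] == socket.AF_INET})
    ipv6 = sorted({info[4][0] for info in infos if info[0] == socket.AF_INET6})
    return {"ipv4": ipv4, "ipv6": ipv6,
            "punycode": punycode, "unicode": domain}
```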
Behavior — 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the lookup operation and return values but omits critical details like error handling, rate limits, network dependencies, or whether this is a read-only operation. For a network tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that efficiently convey the tool's purpose and additional return values. Every word earns its place, and the information is front-loaded with the core functionality stated first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter lookup tool with no output schema, the description adequately covers the basic operation and return values. However, it lacks important context about network behavior, error conditions, and how it differs from sibling DNS tools, leaving some gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single 'domain' parameter. The description adds no additional parameter semantics beyond what's in the schema, maintaining the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the IP addresses'), resource ('for a domain'), and scope ('by looking up A and AAAA records'), plus additional return values ('punycode and unicode domain representations'). It distinguishes itself from sibling DNS-related tools like dns_lookup or dns_record by specifying the exact record types queried.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like dns_lookup or dns_record, nor does it mention any prerequisites or exclusions. It simply states what the tool does without contextual usage information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
