Server Details

DNS MCP — DNS and network lookup tools

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-dns
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

Tool Descriptions (B)

Average 3.8/5 across 7 of 7 tools scored. Lowest: 2.9/5.

Server Coherence (B)
Disambiguation: 3/5

The DNS-related tools (dns_lookup, dns_lookup_all, reverse_dns) have clear and distinct purposes with minimal overlap, but the memory tools (remember, recall, forget) are unrelated to DNS, creating a disjointed set. The discover_tools tool is also separate, making the overall toolset appear as three loosely connected groups rather than a cohesive whole.

Naming Consistency: 3/5

The DNS tools follow a consistent snake_case pattern (e.g., dns_lookup, reverse_dns), and the memory tools also use snake_case (remember, recall, forget). However, discover_tools deviates slightly by not including a verb prefix, and the naming conventions across the three groups (DNS, memory, discovery) are not unified, resulting in a mixed but readable overall pattern.

Tool Count: 4/5

With 7 tools, the count is reasonable and well-scoped for a server that appears to combine DNS operations with memory management and tool discovery. It's not excessive, and each tool serves a distinct function, though the combination of domains might feel slightly broad.

Completeness: 2/5

For the DNS domain, the tools cover lookups and reverse lookups adequately, but there are significant gaps such as creating, updating, or deleting DNS records, which limits functionality. The memory tools provide basic CRUD operations, but the discover_tools tool stands alone without clear integration, making the overall surface feel incomplete and patchy for a unified purpose.

Available Tools

8 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
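For concreteness, a minimal MCP tools/call request for this tool could look like the sketch below, written as a Python dict. The jsonrpc/method/params envelope follows the standard MCP convention; the server publishes no output schema, so the response shape is not shown.

```python
# Minimal sketch: calling ask_pipeworx over MCP (JSON-RPC envelope).
# The question is free-form natural language; Pipeworx routes it to a data source.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
```
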
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that Pipeworx handles tool selection and argument filling, which adds useful behavioral context. However, it lacks details on permissions, rate limits, error handling, or response format, leaving gaps for a tool that performs complex backend operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core functionality. Each sentence adds value: the first explains the purpose, the second details the mechanism, and the third provides concrete examples. There is no wasted text, making it efficient and easy to understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (dynamic tool selection and execution) and lack of annotations or output schema, the description is somewhat incomplete. It explains the input mechanism well but omits details on output format, error cases, or limitations. While it covers basic usage, more context would help an agent anticipate behavior fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'question' well-documented in the schema. The description adds minimal semantic value by reiterating 'question or request in natural language' and providing examples, but does not go beyond what the schema already specifies. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes from siblings by emphasizing natural language input versus structured tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: for asking questions in plain English without needing to browse tools or learn schemas. It includes examples that illustrate appropriate use cases. However, it does not explicitly state when not to use it or name alternatives among siblings, such as when structured tool invocation might be preferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
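A hedged sketch of a discover_tools call follows; the ranked-list response format is inferred from the description above, not from a published schema.

```python
# Sketch: searching the Pipeworx catalog for relevant tools.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "look up FDA drug approvals",  # natural language task
            "limit": 10,                            # optional; default 20, max 50
        },
    },
}
```
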
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it performs a search based on natural language queries and returns relevant tools. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions, leaving some behavioral aspects uncovered.

Conciseness: 5/5

The description is perfectly concise and well-structured in two sentences. The first sentence explains the core functionality, and the second provides critical usage guidance. Every word earns its place with no redundancy or unnecessary elaboration.

Completeness: 4/5

Given the tool's moderate complexity (search functionality with two parameters) and no annotations or output schema, the description provides good contextual coverage. It explains the purpose, usage context, and behavioral approach adequately, though it could benefit from mentioning what the return format looks like (since there's no output schema).

Parameters: 3/5

The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't elaborate on query formatting or limit implications). This meets the baseline expectation when schema coverage is high.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('Search', 'Returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes from siblings by focusing on tool discovery rather than DNS operations, making its role explicit and differentiated.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This clearly specifies when to use it (large catalog scenarios) and implies alternatives are not needed initially, offering strong contextual direction.

dns_lookup (B)

Look up a specific DNS record type for a domain. Specify record type (e.g., 'A', 'MX', 'TXT', 'CNAME'). Returns records with TTLs and data values.

Parameters (JSON Schema)
- domain (required): Domain name to look up (e.g., "example.com", "mail.google.com")
- type (optional): DNS record type to query (e.g., "A", "AAAA", "MX", "NS", "TXT", "CNAME", "SOA"). Defaults to "A".
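As an illustration, the sketch below asks for MX records; omitting "type" would default to "A". Only the request payload is shown, since no output schema is published.

```python
# Sketch: look up MX records for a domain via dns_lookup.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "dns_lookup",
        "arguments": {"domain": "example.com", "type": "MX"},  # type defaults to "A"
    },
}
```
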
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It mentions the method (Google DNS-over-HTTPS) and return format (records with TTLs and data), but lacks details on error handling, rate limits, authentication needs, or whether it's read-only. For a tool with no annotations, this leaves significant behavioral gaps.

Conciseness: 5/5

The description is two sentences, front-loaded with the core purpose and method, followed by output details. Every sentence adds value without redundancy, making it efficient and well-structured.

Completeness: 3/5

Given no annotations, no output schema, and a simple input schema with full coverage, the description covers the basic purpose and method adequately. However, for a tool with no structured safety or output info, it should ideally include more on behavioral aspects like error cases or response format details to be fully complete.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema, mentioning 'requested type' and 'domain' without providing additional syntax or format details. Baseline 3 is appropriate as the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the action ('Look up DNS records') and resource ('for a domain'), specifying the method ('using Google DNS-over-HTTPS') and output ('Returns records of the requested type with TTLs and data values'). It distinguishes from 'reverse_dns' but not explicitly from 'dns_lookup_all', which might offer broader functionality.

Usage Guidelines: 3/5

The description implies usage for DNS queries with specific record types, but does not explicitly state when to use this tool versus alternatives like 'dns_lookup_all' (which might return all record types) or 'reverse_dns' (for reverse lookups). It provides basic context without exclusions or clear alternatives.

dns_lookup_all (A)

Query all major DNS record types (A, AAAA, MX, NS, TXT, CNAME) for a domain in one call. Returns results grouped by type with TTLs and values.

Parameters (JSON Schema)
- domain (required): Domain name to look up (e.g., "example.com")
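The sketch below shows a dns_lookup_all request plus one plausible grouped result; the result shape is an assumption, since the server publishes no output schema.

```python
# Sketch: fetch all major record types for a domain in one call.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "dns_lookup_all",
        "arguments": {"domain": "example.com"},
    },
}

# Assumed response shape: records grouped by type, each with a TTL and value.
assumed_result = {
    "A": [{"ttl": 300, "data": "93.184.216.34"}],
    "TXT": [{"ttl": 3600, "data": "v=spf1 -all"}],
}
```
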
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the query behavior ('queries... simultaneously') and output format ('returns all results grouped by type'), but lacks details on error handling, rate limits, authentication needs, or network dependencies. For a tool with no annotations, this leaves significant behavioral traits undocumented.

Conciseness: 5/5

The description is concise and front-loaded, consisting of two efficient sentences that directly convey the tool's functionality and output. Every sentence earns its place by specifying the multi-record lookup and result grouping without unnecessary details.

Completeness: 3/5

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description adequately covers the core purpose and output format. However, it lacks details on behavioral aspects like error conditions or performance, which are important for a network-dependent tool. The description is complete enough for basic use but has gaps for robust agent operation.

Parameters: 3/5

The input schema has 100% description coverage, with the single parameter 'domain' well-documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints. Given the high schema coverage, a baseline score of 3 is appropriate as the description does not compensate but also does not detract.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('look up multiple DNS record types') and resources ('for a domain'), and explicitly distinguishes it from the sibling 'dns_lookup' by emphasizing the multi-record query capability ('in one call', 'simultaneously'). This provides clear differentiation from alternatives.

Usage Guidelines: 4/5

The description implicitly suggests usage when multiple DNS record types are needed at once ('A, AAAA, MX, NS, TXT, and CNAME records simultaneously'), which contrasts with the sibling 'dns_lookup', which likely handles single-type queries. However, it does not explicitly state when NOT to use this tool or name alternatives, leaving some guidance gaps.

forget (C)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
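A minimal sketch of a forget call follows; the description does not say what happens when the key is absent, so error handling is left to the client.

```python
# Sketch: delete one stored memory by key (a destructive operation).
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "forget",
        "arguments": {"key": "subject_property"},  # behavior for missing keys is undocumented
    },
}
```
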
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes a memory, which implies a destructive mutation, but doesn't address critical aspects like whether deletion is permanent, what happens if the key doesn't exist, or any permission requirements. This leaves significant gaps for a mutation tool.

Conciseness: 5/5

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action ('Delete'), making it immediately clear and appropriately sized for a simple tool with one parameter.

Completeness: 2/5

For a destructive mutation tool with no annotations and no output schema, the description is insufficient. It lacks details on behavioral outcomes (e.g., success/error responses), side effects, or integration with sibling tools, leaving the agent with incomplete context for reliable invocation.

Parameters: 3/5

The schema description coverage is 100%, with the single parameter 'key' fully documented in the schema as 'Memory key to delete'. The description adds no additional semantic context beyond what the schema provides, such as key format examples or constraints, so it meets the baseline for high schema coverage.

Purpose: 4/5

The description clearly states the verb ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the destructive action distinguishes it from read operations.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. While the description implies deletion of stored memories, it doesn't specify prerequisites (e.g., whether the key must exist), error conditions, or relationships with sibling tools like 'remember' (for creation) or 'recall' (for retrieval).

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
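Two hedged request sketches: one retrieves a single key, the other omits "key" to list every stored memory.

```python
# Sketch: retrieve one memory, or list all by omitting "key".
fetch_one = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {"key": "subject_property"}},
}

list_all = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {}},  # no key: list all stored keys
}
```
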
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses that it retrieves memories stored in current or previous sessions, which is useful context. However, it doesn't mention potential limitations like memory size, retrieval speed, or error handling for invalid keys, leaving behavioral gaps.

Conciseness: 5/5

The description is front-loaded with the core functionality in the first sentence, followed by usage context. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-structured.

Completeness: 4/5

Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and parameter semantics adequately. However, without annotations or output schema, it could benefit from more detail on return format or error cases.

Parameters: 4/5

The schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the semantics: omitting the key lists all memories, while providing a key retrieves a specific memory. This clarifies the optional parameter's behavior beyond the schema's technical description.

Purpose: 5/5

The description clearly states the specific verb ('retrieve') and resource ('previously stored memory'), distinguishing it from siblings like 'remember' (store) and 'forget' (delete). It explicitly mentions retrieving by key or listing all memories, providing precise functionality.

Usage Guidelines: 5/5

The description explicitly states when to use this tool ('to retrieve context you saved earlier') and provides clear usage guidance: 'omit key to list all stored memories.' It distinguishes from siblings by focusing on retrieval rather than storage or deletion.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
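A sketch of a remember call. Per the description, persistence depends on auth state (persistent when authenticated, 24 hours for anonymous sessions); overwrite behavior for an existing key is undocumented.

```python
# Sketch: store a key-value pair in session memory.
request = {
    "jsonrpc": "2.0",
    "id": 8,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            "key": "target_ticker",  # free-form string key
            "value": "AAPL",         # any text value
        },
    },
}
```
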
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the persistence differences between authenticated users ('persistent memory') and anonymous sessions ('last 24 hours'), and the tool's purpose for cross-tool context. It lacks details on potential limitations (e.g., storage size, rate limits) but covers essential operational context.

Conciseness: 5/5

The description is front-loaded with the core purpose in the first sentence, followed by usage context and behavioral details. Every sentence adds value without redundancy, and it efficiently conveys necessary information in three concise sentences.

Completeness: 4/5

Given the tool's moderate complexity (storage with persistence rules), no annotations, and no output schema, the description does well by explaining the tool's behavior and usage. It could improve by mentioning what happens on overwrites or error conditions, but it covers the essential context for effective use.

Parameters: 3/5

The schema description coverage is 100%, so the schema already fully documents both parameters. The description does not add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't explain key naming conventions or value formatting further). This meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'forget' (remove) and 'recall' (retrieve). It provides concrete examples of what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous.

Usage Guidelines: 4/5

The description explicitly states when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), providing clear context. However, it does not mention when not to use it or explicitly name alternatives (e.g., 'recall' for retrieval), which prevents a perfect score.

reverse_dns (A)

Find the hostname for an IP address via reverse DNS lookup. Returns the PTR record if available.

Parameters (JSON Schema)
- ip (required): IPv4 address to reverse-lookup (e.g., "8.8.8.8")
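For illustration: a reverse lookup for 8.8.8.8 conventionally resolves the PTR record at 8.8.8.8.in-addr.arpa; whether this server queries exactly that name is an assumption, since only the description is published.

```python
# Sketch: reverse-resolve an IPv4 address to its hostname (PTR record).
request = {
    "jsonrpc": "2.0",
    "id": 9,
    "method": "tools/call",
    "params": {
        "name": "reverse_dns",
        "arguments": {"ip": "8.8.8.8"},  # IPv4 only, per the schema
    },
}
```
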
Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the return value ('Returns the PTR record (hostname) associated with the IP, if one exists'), which is useful, but lacks details on error handling, rate limits, authentication needs, or network behavior. It adds some value but is incomplete for a tool with no annotation coverage.

Conciseness: 5/5

The description is two sentences with zero waste: the first states the purpose, and the second explains the return value. It is front-loaded and appropriately sized for a simple tool, with every sentence earning its place.

Completeness: 3/5

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but has gaps. It explains the basic operation and return value, but without annotations or output schema, it should ideally cover more behavioral aspects like error cases or performance. It meets minimum viability but could be more complete.

Parameters: 3/5

Schema description coverage is 100%, so the input schema fully documents the single parameter 'ip'. The description does not add any parameter-specific details beyond what the schema provides (e.g., it doesn't clarify format constraints or examples). Baseline 3 is appropriate when the schema handles parameter documentation.

Purpose: 5/5

The description clearly states the specific action ('Perform a reverse DNS lookup') and resource ('for an IP address'), distinguishing it from sibling tools like 'dns_lookup' and 'dns_lookup_all' which likely perform forward DNS lookups. It precisely defines the operation without being vague or tautological.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus its siblings ('dns_lookup', 'dns_lookup_all'), nor does it mention any prerequisites, exclusions, or alternative scenarios. It states what the tool does but offers no contextual usage advice.
