
Server Details

DNS MCP — DNS and network lookup tools

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-dns
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.5/5 across 3 of 3 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: dns_lookup queries specific record types, dns_lookup_all queries multiple predefined types simultaneously, and reverse_dns performs PTR lookups for IP addresses. There is no overlap or ambiguity between these functions.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear verb_noun structure (dns_lookup, dns_lookup_all, reverse_dns). The naming is predictable and readable throughout the set.

Tool Count: 5/5

With 3 tools, this server is well-scoped for DNS operations, covering forward lookups (single and multi-type) and reverse lookups. Each tool earns its place without feeling thin or excessive for the domain.

Completeness: 4/5

The tools cover core DNS query operations effectively, including forward and reverse lookups. A minor gap is the lack of explicit tools for less common record types (e.g., SOA, SRV) or dynamic updates, but agents can work around this using the existing tools.

Available Tools

3 tools
dns_lookup (Grade: B)

Look up DNS records for a domain using Google DNS-over-HTTPS. Returns records of the requested type with TTLs and data values.

Parameters (JSON Schema)

Name   | Required | Description                                                                     | Default
type   | No       | DNS record type to query (e.g., "A", "AAAA", "MX", "NS", "TXT", "CNAME", "SOA") | "A"
domain | Yes      | Domain name to look up (e.g., "example.com", "mail.google.com")                 | —
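The description names Google DNS-over-HTTPS as the lookup method. A minimal sketch of how such a query could work, assuming the public `https://dns.google/resolve` JSON endpoint (not this server's actual implementation; function names here are illustrative):

```python
import json
import urllib.parse
import urllib.request

GOOGLE_DOH = "https://dns.google/resolve"

def build_query_url(domain: str, record_type: str = "A") -> str:
    """Build the DNS-over-HTTPS query URL for a domain and record type."""
    params = urllib.parse.urlencode({"name": domain, "type": record_type})
    return f"{GOOGLE_DOH}?{params}"

def dns_lookup(domain: str, record_type: str = "A") -> list:
    """Fetch records of the requested type; returns [] when none exist."""
    with urllib.request.urlopen(build_query_url(domain, record_type), timeout=10) as resp:
        payload = json.load(resp)
    # The "Answer" section is absent when the name has no records of this type.
    return payload.get("Answer", [])
```

Each entry in the `Answer` array carries the record name, a numeric type code, a TTL, and a data value, which matches the TTLs and data values the tool description promises.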
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the method (Google DNS-over-HTTPS) and return format (records with TTLs and data), but lacks details on error handling, rate limits, authentication needs, or whether it's read-only. For a tool with no annotations, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose and method, followed by output details. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a simple input schema with full coverage, the description covers the basic purpose and method adequately. However, for a tool with no structured safety or output info, it should ideally include more on behavioral aspects like error cases or response format details to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema, mentioning 'requested type' and 'domain' without providing additional syntax or format details. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Look up DNS records') and resource ('for a domain'), specifying the method ('using Google DNS-over-HTTPS') and output ('Returns records of the requested type with TTLs and data values'). It distinguishes from 'reverse_dns' but not explicitly from 'dns_lookup_all', which might offer broader functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for DNS queries with specific record types, but does not explicitly state when to use this tool versus alternatives like 'dns_lookup_all' (which might return all record types) or 'reverse_dns' (for reverse lookups). It provides basic context without exclusions or clear alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dns_lookup_all (Grade: A)

Look up multiple DNS record types for a domain in one call. Queries A, AAAA, MX, NS, TXT, and CNAME records simultaneously and returns all results grouped by type.

Parameters (JSON Schema)

Name   | Required | Description                                  | Default
domain | Yes      | Domain name to look up (e.g., "example.com") | —
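The "grouped by type" behavior the description promises can be sketched as a loop over the six listed record types, here with the single-type resolver injected so the grouping logic stands on its own (an illustrative sketch, not this server's actual implementation):

```python
from typing import Callable, Dict, List

# The six types the tool description says are queried together.
RECORD_TYPES = ("A", "AAAA", "MX", "NS", "TXT", "CNAME")

def lookup_all(domain: str,
               resolve: Callable[[str, str], List[dict]]) -> Dict[str, List[dict]]:
    """Query every type via resolve(domain, rtype) and group results by type."""
    return {rtype: resolve(domain, rtype) for rtype in RECORD_TYPES}
```

An implementation could issue the six queries concurrently (e.g., with a thread pool) rather than sequentially; the grouping shape of the result is the same either way.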
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the query behavior ('queries... simultaneously') and output format ('returns all results grouped by type'), but lacks details on error handling, rate limits, authentication needs, or network dependencies. For a tool with no annotations, this leaves significant behavioral traits undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, consisting of two efficient sentences that directly convey the tool's functionality and output. Every sentence earns its place by specifying the multi-record lookup and result grouping without unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description adequately covers the core purpose and output format. However, it lacks details on behavioral aspects like error conditions or performance, which are important for a network-dependent tool. The description is complete enough for basic use but has gaps for robust agent operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'domain' well-documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints. Given the high schema coverage, a baseline score of 3 is appropriate as the description does not compensate but also does not detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('look up multiple DNS record types') and resources ('for a domain'), and explicitly distinguishes it from the sibling 'dns_lookup' by emphasizing the multi-record query capability ('in one call', 'simultaneously'). This provides clear differentiation from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests usage when multiple DNS record types are needed at once ('A, AAAA, MX, NS, TXT, and CNAME records simultaneously'), which contrasts with the sibling 'dns_lookup' likely for single-type queries. However, it does not explicitly state when NOT to use this tool or name alternatives, leaving some guidance gaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reverse_dns (Grade: A)

Perform a reverse DNS lookup for an IP address. Returns the PTR record (hostname) associated with the IP, if one exists.

Parameters (JSON Schema)

Name | Required | Description                                      | Default
ip   | Yes      | IPv4 address to reverse-lookup (e.g., "8.8.8.8") | —
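A PTR lookup queries a special reverse-mapping name built from the IP address: the octets are reversed and the `in-addr.arpa` suffix is appended (for IPv4). Python's standard library can construct that name directly; a small sketch, independent of this server's actual implementation:

```python
import ipaddress

def ptr_name(ip: str) -> str:
    """Return the reverse-lookup name a PTR query uses (in-addr.arpa for IPv4)."""
    # reverse_pointer reverses the octets and appends the .arpa suffix;
    # an invalid address raises ValueError before any network call is made.
    return ipaddress.ip_address(ip).reverse_pointer
```

For example, `ptr_name("8.8.4.4")` yields `4.4.8.8.in-addr.arpa`, the name whose PTR record holds the hostname, if one exists.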
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the return value ('Returns the PTR record (hostname) associated with the IP, if one exists'), which is useful, but lacks details on error handling, rate limits, authentication needs, or network behavior. It adds some value but is incomplete for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states the purpose, and the second explains the return value. It is front-loaded and appropriately sized for a simple tool, with every sentence earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but has gaps. It explains the basic operation and return value, but without annotations or output schema, it should ideally cover more behavioral aspects like error cases or performance. It meets minimum viability but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents the single parameter 'ip'. The description does not add any parameter-specific details beyond what the schema provides (e.g., it doesn't clarify format constraints or examples). Baseline 3 is appropriate when the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Perform a reverse DNS lookup') and resource ('for an IP address'), distinguishing it from sibling tools like 'dns_lookup' and 'dns_lookup_all' which likely perform forward DNS lookups. It precisely defines the operation without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings ('dns_lookup', 'dns_lookup_all'), nor does it mention any prerequisites, exclusions, or alternative scenarios. It states what the tool does but offers no contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
