
ramadasmr-networkcalc-mcp

Server Details

Look up DNS information for any domain to troubleshoot issues and gather insights. Get fast, relia…

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ramadasmr/networkcalc-mcp
GitHub Stars: 1
Server Listing
NetworkCalc MCP Server

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
calculate_subnet (grade C)

Fetch Subnet info for a given ipaddress/subnet/CIDR

Parameters (JSON Schema)
  subnet (required): IP address, subnet, or CIDR notation
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to indicate whether this performs local calculation or external queries, what the return format contains, or how invalid CIDR notation is handled.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: a single sentence in which every word earns its place, with the action verb front-loaded. No redundant or filler text present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation tool with no output schema, the description inadequately describes the return value. 'Subnet info' is ambiguous—users need to know if this returns network/broadcast addresses, usable IP ranges, or mask conversions to select this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the 'subnet' parameter fully documented as accepting 'IP address, subnet, or CIDR notation'. The description essentially mirrors this information without adding syntax examples, validation rules, or format preferences, meeting the baseline for well-covered schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Fetch'), identifies the resource ('Subnet info'), and specifies acceptable input formats (ipaddress/subnet/CIDR). However, it lacks specificity about what 'info' entails (network ranges, masks, host counts) and does not differentiate it from sibling network tools like whois_lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. Given siblings include other network utilities (dns_lookup, whois_lookup, certificate_info), the description should clarify this performs IP math/calculation rather than external lookups or domain queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
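
The assessments above flag that 'Subnet info' leaves the return contents ambiguous. As an illustration only (not the server's actual implementation), the fields such a calculator typically returns can be computed locally with Python's standard ipaddress module:

```python
import ipaddress

def subnet_info(subnet: str) -> dict:
    """Compute the fields a subnet tool typically returns.

    Purely local calculation -- no external queries. Raises
    ValueError for invalid CIDR notation.
    """
    net = ipaddress.ip_network(subnet, strict=False)
    usable = net.num_addresses
    if net.version == 4:
        # IPv4 reserves network and broadcast addresses
        usable = max(net.num_addresses - 2, 0)
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
        "prefix_length": net.prefixlen,
        "usable_hosts": usable,
    }

info = subnet_info("192.168.1.37/26")
print(info["network"], info["broadcast"], info["usable_hosts"])
```

A description along these lines ("returns network/broadcast addresses, netmask, and usable host count; computed locally") would answer most of the gaps the assessment identifies.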

certificate_info (grade C)

Fetch certificate for a given domain

Parameters (JSON Schema)
  domain (required): Domain to look up
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It fails to disclose what certificate data is returned (PEM format, parsed fields, chain details), which port/protocol is used (443/HTTPS), or validation behavior (expired/self-signed certificates).

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is appropriately brief and front-loaded with the core action. However, extreme brevity contributes to the lack of behavioral transparency and completeness, preventing a score of 5.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema or annotations, the description should indicate what certificate details are returned (issuer, expiration, SANs, full chain). It also omits error handling behavior and authentication requirements, leaving significant gaps for a security-related tool.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Domain to look up'), the baseline is 3. The description mentions 'given domain' but adds no additional semantic context such as expected format (FQDN, exclude protocol) or examples that would help the agent construct valid inputs.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('Fetch') and identifies the resource ('certificate') and scope ('for a given domain'). It distinguishes sufficiently from siblings like dns_lookup and whois_lookup by specifying 'certificate', though it could explicitly mention SSL/TLS to remove ambiguity.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like dns_lookup or whois_lookup, which also accept domain parameters. No prerequisites (e.g., port availability, protocol requirements) or error conditions are mentioned.

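
For context on the fields a certificate tool could return (issuer, expiration, SANs), here is a hedged sketch using only Python's standard ssl and socket modules. The server's real behavior, port handling, and validation rules are unknown; fetch_cert and summarize_cert are illustrative names, not the tool's API.

```python
import socket
import ssl

def fetch_cert(domain: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Retrieve the peer certificate over TLS (port 443 by default).

    ssl.create_default_context() verifies the chain, so expired or
    self-signed certificates raise ssl.SSLCertVerificationError.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((domain, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=domain) as tls:
            return tls.getpeercert()

def summarize_cert(cert: dict) -> dict:
    """Reduce getpeercert() output to the fields agents usually need."""
    return {
        # issuer is a tuple of RDN tuples, each holding (key, value) pairs
        "issuer": dict(rdn[0] for rdn in cert.get("issuer", ())),
        "not_after": cert.get("notAfter"),
        "sans": [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"],
    }
```

fetch_cert hits the network; summarize_cert can be exercised on any getpeercert()-shaped dict, which is how the behavior is testable offline.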
dns_lookup (grade C)

Fetch DNS info for a given domain

Parameters (JSON Schema)
  domain (required): Domain to look up
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to specify what DNS record types are returned (A, MX, NS, etc.), whether the lookup is recursive, rate limiting, or error handling for non-existent domains. The word 'Fetch' implies read-only access but does not explicitly confirm safety or side effects.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely brief and front-loaded with the core action. While it avoids verbosity, it may be excessively minimal, lacking any supporting context about output format or scope that would help an agent predict the tool's utility.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should ideally characterize the returned DNS information (record types, structure). Without annotations to provide safety hints or output metadata, the description leaves significant gaps in the agent's understanding of the tool's full contract.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'domain' parameter, the schema adequately documents the input. The description provides minimal additional semantic context beyond the schema's 'Domain to look up', meeting the baseline expectation but not adding clarifying examples or format details.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Fetch) and resource (DNS info for a domain). However, it does not explicitly distinguish itself from the sibling tool spf_lookup, which also performs DNS lookups but for a specific record type. A higher score would require clarifying this distinction.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like spf_lookup (for SPF records specifically) or certificate_info. There are no stated prerequisites, exclusions, or conditions that would help an agent select this tool appropriately.

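
To illustrate the record-type ambiguity noted above: a system-resolver lookup with Python's stdlib surfaces only address records, which is one plausible reading of 'DNS info'. This is a sketch under that assumption, not the server's implementation; MX, TXT, or NS records would need a full DNS client library.

```python
import socket

def dns_lookup(domain: str) -> dict:
    """Resolve a domain's address records via the system resolver.

    Covers A/AAAA only. Raises socket.gaierror for non-existent
    domains (the NXDOMAIN case the description never mentions).
    """
    results = {"A": [], "AAAA": []}
    for family, _, _, _, sockaddr in socket.getaddrinfo(domain, None):
        if family == socket.AF_INET:
            results["A"].append(sockaddr[0])
        elif family == socket.AF_INET6:
            results["AAAA"].append(sockaddr[0])
    # getaddrinfo repeats addresses per socket type; de-duplicate in order
    return {k: list(dict.fromkeys(v)) for k, v in results.items()}
```

A description that named the record types returned and the error raised for missing domains would close most of the gaps scored above.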
spf_lookup (grade B)

Fetch SPF info for a given domain or host

Parameters (JSON Schema)
  domain (required): Domain to look up
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It fails to specify what 'SPF info' entails (raw TXT record vs parsed policy), error handling when no SPF record exists, or whether it follows include/redirect mechanisms. It only states the action without explaining the result.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: a single sentence with no filler, front-loaded with the verb and immediately identifying the resource type. No structural improvements are possible without adding missing content.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool, the description covers the basic intent but remains minimal. Given the absence of an output schema, the description should ideally specify what the return value contains (record string, validation result, etc.). It meets minimum viability but leaves operational gaps.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the 'domain' parameter is documented in the schema). The description mentions 'domain or host' which aligns with the parameter, but adds no additional semantic value regarding accepted formats (e.g., whether subdomains are handled differently) beyond what the schema already provides. Baseline 3 is appropriate.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Fetch[es] SPF info' using a specific verb and resource type. It specifies the scope ('for a given domain or host'), implicitly distinguishing it from general DNS lookups (sibling dns_lookup) by focusing specifically on SPF records rather than generic record types.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Given the sibling dns_lookup tool could theoretically retrieve TXT records (which contain SPF data), the description should clarify when to use this specialized tool versus the general DNS lookup.

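
The assessment asks whether 'SPF info' means the raw TXT record or a parsed policy. As a purely local illustration (the actual tool's output is unknown, and parse_spf is a hypothetical name), turning a raw SPF string into its mechanisms and 'all' qualifier might look like this:

```python
def parse_spf(txt_record: str) -> dict:
    """Parse a raw SPF TXT record into mechanisms and its 'all' policy.

    Purely local parsing; obtaining the record in the first place
    requires a DNS TXT query. Does not follow include:/redirect=.
    """
    parts = txt_record.split()
    if not parts or parts[0].lower() != "v=spf1":
        raise ValueError("not an SPF record")
    qualifiers = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}
    mechanisms, all_policy = [], None
    for term in parts[1:]:
        qual = qualifiers.get(term[0], "pass")  # no prefix means "+"
        body = term.lstrip("+-~?")
        if body == "all":
            all_policy = qual
        else:
            mechanisms.append(body)
    return {"mechanisms": mechanisms, "all": all_policy}

print(parse_spf("v=spf1 include:_spf.google.com ip4:203.0.113.0/24 ~all"))
```

Whether the tool returns something parsed like this or just the raw record string is exactly what the Behavior assessment says the description should state.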
whois_lookup (grade C)

Fetch WHOIS info for a given domain

Parameters (JSON Schema)
  domain (required): Domain to look up
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to address error handling (e.g., domain not found), rate limiting, privacy-protected/REDACTED fields common in modern WHOIS, or the structure of returned data. It only restates the basic operation.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with no redundancy. However, given the lack of annotations and an output schema, this brevity leaves significant informational gaps that slightly undermine its structural value despite the efficient phrasing.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with full schema coverage, the description minimally suffices. However, given the absence of an output schema and annotations, it inadequately prepares the agent for common WHOIS complexities such as varying TLD formats, privacy redaction, or connection timeouts.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('Domain to look up'), establishing baseline adequacy. The description adds no additional semantic context about expected formats (e.g., 'example.com' vs 'www.example.com'), validation rules, or constraints beyond what the schema already provides.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('Fetch') and specific resource ('WHOIS info'), accurately describing the tool's function. However, it does not explicitly differentiate from sibling network tools like dns_lookup or certificate_info, leaving the agent to infer that WHOIS provides ownership/registrar data rather than DNS resolution.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like dns_lookup, certificate_info, or spf_lookup. Given the overlapping network diagnostic context, explicit guidance on choosing WHOIS for registrar/ownership data versus DNS for resolution would be necessary for a higher score.

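
As background for the WHOIS complexities the assessments mention (varying TLD formats, redacted fields): WHOIS is plain text over TCP port 43 (RFC 3912). A minimal sketch of the query plus a tolerant field parser, assuming nothing about the server's actual implementation:

```python
import socket

def whois_query(domain: str, server: str = "whois.iana.org",
                timeout: float = 10.0) -> str:
    """Send a raw WHOIS query: the domain plus CRLF over TCP port 43."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

def parse_whois(text: str) -> dict:
    """Extract 'Key: value' fields, keeping the first occurrence of each.

    WHOIS output is free-form and TLD-specific, so this is best-effort;
    comment lines (starting with '%') are skipped.
    """
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("%"):
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key and value and key not in fields:
                fields[key] = value
    return fields
```

whois_query hits the network (and would need per-TLD server selection in practice); parse_whois can be exercised offline on any saved response.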
