drwho.me developer tools

Server Details

Remote MCP server: 10 developer utilities (base64, JWT, DNS, UUID, URL, JSON, UA, IP lookup).

Status: Healthy
Transport: Streamable HTTP
Repository: hikmahtech/drwhome
GitHub Stars: 1
Server Listing: drwhome

Tool Descriptions (Grade: B)

Average 3.6/5 across 14 of 14 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Every tool has a clearly distinct purpose, and descriptions explicitly guide when to use each (e.g., dns_lookup vs. dossier_dns). No overlapping or ambiguous tools.

Naming Consistency: 4/5

Most tools follow the dossier_<check> pattern consistently, but non-dossier tools mix 'lookup' and 'parse' suffixes (dns_lookup, ip_lookup vs. user_agent_parse). Minor deviation from uniformity.

Tool Count: 5/5

14 tools are well-scoped for a domain health and networking utility. Each tool serves a specific, non-redundant function, and the count feels appropriate for the stated purpose.

Completeness: 4/5

Covers DNS, email auth, TLS, CORS, redirects, headers, web surface, and basic network lookups. Minor gaps like WHOIS or DNSSEC are absent, but core domain health checks are complete.

Available Tools

14 tools
dns_lookup (Grade: B)

Context lookup: Resolve a single DNS record type (A, AAAA, MX, TXT, NS, CNAME, SOA, CAA, or SRV) and return the raw answers. Use for quick, targeted lookups of one record type; prefer dossier_dns for a full multi-type DNS audit in parallel, or dossier_full for a complete domain health check. Queries Cloudflare DoH (1.1.1.1/dns-query) over HTTPS, follows CNAME chains, 5 s timeout. Returns a JSON array of answer objects with name, type, and data fields. On error, returns a string describing the DNS failure.

Parameters (JSON Schema):
- name (required): Domain name or hostname to resolve, e.g. example.com or mail.example.com. FQDN preferred; relative labels are accepted.
- type (required): DNS record type to query. Common choices: A (IPv4), AAAA (IPv6), MX (mail), TXT (SPF/DKIM/verification), NS (nameservers), CNAME (alias).
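
The description names Cloudflare's DoH endpoint and the answer shape. Below is a minimal TypeScript sketch of that kind of query, assuming Cloudflare's public dns-json API (cloudflare-dns.com serves the same JSON format as the 1.1.1.1 endpoint the description names); the dohLookup name and error handling are illustrative, not the server's actual code.

```typescript
// Hypothetical sketch of a DoH JSON lookup like the one dns_lookup describes.
async function dohLookup(name: string, type: string): Promise<any[]> {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=${type}`;
  const res = await fetch(url, {
    headers: { accept: "application/dns-json" },
    signal: AbortSignal.timeout(5_000), // mirrors the tool's stated 5 s timeout
  });
  if (!res.ok) throw new Error(`DoH query failed with HTTP ${res.status}`);
  const body = await res.json();
  return body.Answer ?? []; // answer objects carry name, type, data
}
```
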
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers minimal behavioral insight. It mentions the method ('via Cloudflare DoH') but doesn't disclose critical traits like rate limits, error handling, response format, or whether it's a read-only operation (implied but not stated). This leaves significant gaps for agent understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. Every word earns its place by specifying the action, resource, and method concisely.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with no output schema and no annotations, the description is minimally adequate. It covers the basic purpose and method but lacks details on return values, error conditions, or operational constraints, leaving the agent with incomplete context for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter documentation in the schema itself. The description adds no additional parameter semantics beyond what's already in the schema (e.g., no examples or edge cases), so it meets the baseline for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Resolve a DNS record') and resource (DNS records via Cloudflare DoH), listing the exact record types supported. It distinguishes itself from siblings like ip_lookup by specifying DNS resolution rather than IP geolocation or other data transformations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. While it implicitly suggests usage for DNS resolution, there's no mention of prerequisites (e.g., internet connectivity), limitations (e.g., rate limits), or comparisons to other DNS tools that might exist elsewhere.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_cors (Grade: A)

Core dossier check: Send a CORS preflight OPTIONS request to https://<domain>/ and return the access-control-* response headers. Use to verify CORS policy for a specific origin-method pair, or to check whether a domain allows cross-origin requests; provide origin and method to simulate a precise preflight, or omit to use defaults (origin: https://drwho.me, method: GET). Single OPTIONS request via fetch, 5 s timeout. Returns a CheckResult: on success, {status:"ok", headers:{access-control-allow-origin,...}}; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
- method (optional): Access-Control-Request-Method header value, e.g. POST or PUT. Defaults to GET if omitted.
- origin (optional): Origin header value to include in the preflight, e.g. https://app.example.com. Defaults to https://drwho.me if omitted.
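
A preflight like the one described can be simulated with a plain OPTIONS fetch in a server-side runtime (browsers forbid setting the Origin header). This sketch is illustrative, not the server's code; the defaults mirror the description.

```typescript
// Hypothetical sketch of the preflight dossier_cors describes.
async function corsPreflight(domain: string, origin = "https://drwho.me", method = "GET") {
  try {
    const res = await fetch(`https://${domain}/`, {
      method: "OPTIONS",
      headers: { Origin: origin, "Access-Control-Request-Method": method },
      signal: AbortSignal.timeout(5_000),
    });
    const headers: Record<string, string> = {};
    res.headers.forEach((value, key) => {
      if (key.startsWith("access-control-")) headers[key] = value;
    });
    return { status: "ok", headers };
  } catch (err) {
    return { status: "error", reason: String(err) };
  }
}
```
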
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden of behavioral disclosure. It states what the tool does but doesn't describe important behaviors: whether it makes external HTTP requests, potential rate limits, error handling, timeout behavior, or what happens with invalid domains. For a tool that performs network operations, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence states core purpose and output, second sentence covers optional parameters. Perfectly front-loaded with the essential information first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, but no annotations or output schema, the description is adequate but could be more complete. It explains what the tool does but lacks information about response format, error conditions, or network behavior that would be helpful given this performs external HTTP requests.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters well. The description adds value by clarifying that origin and method are optional (not just from schema's lack of 'required'), and provides context about their purpose in CORS preflight requests, going beyond the schema's technical descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Send a CORS preflight OPTIONS'), target resource ('to https://<domain>/'), and output ('return the access-control-* headers'). It distinguishes from sibling tools like dossier_headers or dossier_web_surface by focusing specifically on CORS preflight testing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (testing CORS configuration) but doesn't explicitly state when to use this vs. alternatives like dossier_headers for general header inspection. It mentions optional parameters but provides no guidance on when they're needed or when other tools might be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_dkim (Grade: A)

Core dossier check: Probe a domain's DKIM public keys by querying <selector>._domainkey.<domain> for each selector. Use to verify signing configuration or discover active selectors; supply selectors when you know the ESP's selector, or omit to probe six common selectors (default, google, k1, selector1, selector2, mxvault). Issues parallel Cloudflare DoH (1.1.1.1) TXT queries per selector, 5 s timeout each. Returns a CheckResult: {status:"ok", found:[{selector, publicKey, raw},...], notFound:[...]} or {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
- selectors (optional): DKIM selector names to probe, e.g. ["google", "s1"]. Omit to probe the built-in common-selectors set: default, google, k1, selector1, selector2, mxvault.
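
The parallel probe pattern is easy to sketch by reusing the hypothetical dohLookup helper from the dns_lookup example above; selector names and the result shape follow the description.

```typescript
const COMMON_SELECTORS = ["default", "google", "k1", "selector1", "selector2", "mxvault"];

// Hypothetical sketch: probe each <selector>._domainkey.<domain> TXT name in parallel.
async function probeDkim(domain: string, selectors: string[] = COMMON_SELECTORS) {
  const probes = selectors.map(async (selector) => {
    const answers = await dohLookup(`${selector}._domainkey.${domain}`, "TXT");
    return { selector, raw: answers[0]?.data as string | undefined };
  });
  const results = await Promise.all(probes);
  return {
    status: "ok",
    found: results.filter((r) => r.raw !== undefined),
    notFound: results.filter((r) => r.raw === undefined).map((r) => r.selector),
  };
}
```
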
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the default selector list and the return type (CheckResult), but does not mention potential side effects, rate limits, authentication needs, or error handling. It provides basic behavioral context but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by essential usage details, all in two concise sentences. Every sentence earns its place with no wasted words, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description adequately covers the tool's purpose and basic usage. However, it lacks details on the CheckResult structure, error cases, or operational constraints, leaving room for improvement in completeness for a probing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description adds marginal value by clarifying the default selector behavior when 'selectors' is omitted, but does not provide additional semantic details beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Probe DKIM selectors') and resource ('for a domain'), distinguishing it from sibling tools like dossier_dmarc or dossier_spf. It precisely defines the tool's function without being tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use custom selectors versus default behavior, but does not explicitly state when to use this tool versus alternatives like dossier_dns or other dossier_* tools. It offers some guidance but lacks sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_dmarc (Grade: B)

Core dossier check: Retrieve and parse a domain's DMARC policy from its _dmarc.<domain> TXT record, returning all tags. Use to audit email authentication policy, verify the p (policy) and rua (reporting) settings, or confirm alignment mode; pair with dossier_spf and dossier_dkim for complete email-auth coverage. Queries _dmarc.<domain> via Cloudflare DoH (1.1.1.1), 5 s timeout; parses each tag=value pair. Returns a CheckResult: on success, {status:"ok", raw, tags:{p, rua, ruf, adkim, aspf,...}}; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
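
The tag=value decomposition the description mentions is plain string splitting. A simplified sketch (parsing rules loosened relative to RFC 7489):

```typescript
// Hypothetical sketch: split a raw DMARC record into the tags object
// dossier_dmarc returns, e.g. "v=DMARC1; p=reject; rua=mailto:agg@example.com".
function parseDmarcTags(raw: string): Record<string, string> {
  const tags: Record<string, string> = {};
  for (const part of raw.split(";")) {
    const eq = part.indexOf("=");
    if (eq === -1) continue;
    const key = part.slice(0, eq).trim();
    if (key) tags[key] = part.slice(eq + 1).trim();
  }
  return tags;
}

// parseDmarcTags("v=DMARC1; p=quarantine; rua=mailto:agg@example.com")
// => { v: "DMARC1", p: "quarantine", rua: "mailto:agg@example.com" }
```
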
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the output format ('CheckResult discriminated union'), which adds some context, but lacks details on error handling, rate limits, authentication needs, or what 'parsed into tags' entails. For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes key details like the source (_dmarc.<domain>) and output format. There is no wasted verbiage, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (parsing DMARC records), lack of annotations, and no output schema, the description is moderately complete. It covers the purpose and output format but misses behavioral details like error cases or performance characteristics. For a tool with no structured metadata, it's adequate but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'domain' parameter documented as 'Public FQDN.' The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints. Baseline 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the DMARC record for a domain from _dmarc.<domain>, parsed into tags, as a CheckResult discriminated union.' It specifies the verb ('Return'), resource ('DMARC record'), and scope ('parsed into tags'), distinguishing it from generic DNS tools. However, it doesn't explicitly differentiate from sibling tools like dossier_dns or dossier_spf, which reduces clarity slightly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like dossier_dns (which might handle general DNS queries) or dossier_spf (which deals with SPF records), nor does it specify prerequisites or exclusions. Usage is implied by the purpose but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_dns (Grade: A)

Core dossier check: Fetch a domain's full DNS profile — A, AAAA, NS, SOA, CAA, and TXT records — all in parallel. Use as the first step of a domain audit or when you need a comprehensive DNS snapshot in one call; prefer dns_lookup for a single record type, or dossier_full for all 10 dossier checks at once. Fires six Cloudflare DoH (1.1.1.1) queries concurrently, each with a 5 s timeout. Returns a CheckResult discriminated union: on success, {status:"ok", records:{a, aaaa, ns, soa, caa, txt}}; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
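
The dossier_* descriptions repeatedly quote a "CheckResult discriminated union". Its shape can be pinned down as a TypeScript type; this is inferred from the documented success and failure payloads, not from published source.

```typescript
// CheckResult as the tool descriptions consistently quote it:
// either { status: "ok", ...payload } or { status: "error", reason }.
type CheckResult<T extends object> =
  | ({ status: "ok" } & T)
  | { status: "error"; reason: string };

// dossier_dns's documented success payload, for example:
type DnsProfile = {
  records: { a: unknown[]; aaaa: unknown[]; ns: unknown[]; soa: unknown[]; caa: unknown[]; txt: unknown[] };
};
type DnsCheckResult = CheckResult<DnsProfile>;
```
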
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds value by specifying parallel resolution and the return type (CheckResult discriminated union), which helps understand performance and output structure. However, it lacks details on error handling, rate limits, authentication needs, or side effects, leaving gaps in behavioral context for a tool that performs network operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, consisting of two sentences that efficiently convey the tool's action, scope, and output. Every sentence adds critical information without redundancy, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (DNS resolution with multiple record types) and lack of annotations and output schema, the description is moderately complete. It covers the core functionality and output type but misses details like error cases, performance expectations, or how the CheckResult is structured. For a network tool with no structured output documentation, more context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'domain' parameter. The description adds no additional parameter semantics beyond what the schema provides, but with only one parameter and high coverage, the baseline is 3. The score is elevated to 4 because the description implicitly reinforces the parameter's purpose by mentioning 'domain dossier' and DNS resolution, though it doesn't explicitly detail parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Run', 'resolves') and resources ('DNS section of the domain dossier', 'A, AAAA, NS, SOA, CAA, TXT records'). It distinguishes from sibling 'dns_lookup' by specifying parallel resolution of multiple record types and returning a CheckResult discriminated union, making the scope and output format explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying what the tool resolves (multiple DNS record types in parallel) and the return type, but does not explicitly state when to use this vs. alternatives like 'dns_lookup' or other DNS-related tools. No guidance on prerequisites, exclusions, or specific scenarios is provided, leaving usage decisions to inference.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_full (Grade: A)

Aggregate dossier check: Run all 10 Domain Dossier checks — dns, mx, spf, dmarc, dkim, tls, redirects, headers, cors, web-surface — in parallel and return all results in a single response. Use when you need a comprehensive domain health snapshot in one call; counts as ONE paywall call regardless of how many checks run. For a single focused check, prefer the individual dossier_* tools to minimise latency. Fires all 10 checks concurrently via Cloudflare DoH or direct HTTPS, 5 s per-check timeout. Returns a JSON object keyed by check id (dns, mx, etc.), each value a CheckResult discriminated union ({status:"ok",...} or {status:"error", reason}).

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
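
The aggregation pattern is a Promise.all over named check functions keyed by check id. A sketch under the assumption that each check returns a CheckResult-shaped value; the checks map and names are illustrative.

```typescript
// Hypothetical sketch: run every check concurrently and key results by check id.
type Check = (domain: string) => Promise<unknown>;

async function dossierFull(domain: string, checks: Record<string, Check>) {
  const entries = await Promise.all(
    Object.entries(checks).map(async ([id, run]) => {
      try {
        return [id, await run(domain)] as const;
      } catch (err) {
        return [id, { status: "error", reason: String(err) }] as const;
      }
    })
  );
  return Object.fromEntries(entries); // e.g. { dns: {...}, mx: {...}, ... }
}
```
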
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes parallel execution and single-call cost disclosure, which is helpful beyond nonexistent annotations. However, lacks details on potential side effects, error handling, or timeouts for the combined checks. No annotations exist to supplement, so description carries burden but covers only the key behavioral trait.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states action and lists checks, second describes return format and cost. Highly efficient with no wasted words. Information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (one input param, no nested objects, no output schema), description adequately covers purpose, behavior, and return. Missing details like error handling or timeout behavior, but for a straightforward parallel check tool, it's sufficient. No output schema so return format is described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage for the single 'domain' parameter with clear description 'Public FQDN.' Description adds context that it runs checks for a domain, aligning with the schema. Since coverage is high, baseline is 3; the description's mention of 'domain' reinforces usage, but no additional parameter details beyond schema. However, this is acceptable as schema is clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it runs all 10 dossier checks in parallel for a domain, listing each check by name, and specifies the return format (JSON object keyed by check id). This effectively differentiates it from sibling tools like dossier_dns or dossier_mx, which perform individual checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implicitly guides when to use: when all checks are needed in one call (parallel execution) and counts as one MCP call. However, no explicit when-not-to-use or alternatives are mentioned, e.g., could suggest using individual checks for specific needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_headers (Grade: C)

Core dossier check: Fetch https://<domain>/ and return all HTTP response headers, with an audit highlighting missing or misconfigured security headers. Use to review CSP, HSTS, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, and Permissions-Policy; for redirect tracing use dossier_redirects instead. Single GET via fetch, 5 s timeout, captures raw response headers before any redirect is followed. Returns a CheckResult: on success, {status:"ok", headers:{...}, securityAudit:[{header, present, value},...]}; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
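
The audit amounts to checking a fixed list of headers against one response. A sketch assuming the six headers the description calls out; function name and shapes are illustrative.

```typescript
const SECURITY_HEADERS = [
  "content-security-policy",
  "strict-transport-security",
  "x-frame-options",
  "x-content-type-options",
  "referrer-policy",
  "permissions-policy",
];

// Hypothetical sketch of the fetch-and-audit flow dossier_headers describes.
async function auditHeaders(domain: string) {
  const res = await fetch(`https://${domain}/`, {
    redirect: "manual", // capture headers before any redirect, per the description
    signal: AbortSignal.timeout(5_000),
  });
  const securityAudit = SECURITY_HEADERS.map((header) => ({
    header,
    present: res.headers.has(header),
    value: res.headers.get(header) ?? undefined,
  }));
  return { status: "ok", headers: Object.fromEntries(res.headers), securityAudit };
}
```
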
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions fetching and returning headers but omits critical details: whether it follows redirects, handles timeouts, requires specific permissions, or has rate limits. The term 'CheckResult' is undefined, leaving the output format ambiguous. This is inadequate for a network tool with potential side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and output. Every word contributes directly to the tool's purpose, with no redundant or verbose elements. It effectively communicates the essentials in minimal space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a network fetch tool with no annotations and no output schema, the description is incomplete. It fails to explain behavioral traits like error handling, security implications, or the meaning of 'CheckResult'. For a tool that interacts with external systems, more context is needed to ensure safe and correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'domain' documented as 'Public FQDN'. The description adds minimal value by embedding the parameter in the URL template ('https://<domain>/'), but doesn't explain format constraints (e.g., no protocol prefix) or provide examples. Since the schema covers the parameter well, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch') and resource ('https://<domain>/'), specifying it returns 'response headers as a CheckResult'. It distinguishes from siblings like dossier_dns or dossier_tls by focusing on HTTP headers rather than DNS or TLS data. However, it doesn't explicitly contrast with dossier_web_surface or dossier_redirects, which might have overlapping scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't clarify if this is for initial reconnaissance, security checks, or debugging, nor does it mention when to choose dossier_headers over dossier_web_surface or dossier_redirects. The description lacks context about use cases or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_mx (Grade: B)

Core dossier check: Look up a domain's MX (mail exchanger) records and return them sorted ascending by priority. Use when verifying inbound-mail routing or as a precursor to SPF or DMARC checks; prefer dns_lookup with type=MX if you only need the raw DNS answer without the ranked view. Queries Cloudflare DoH (1.1.1.1), follows CNAME aliases, 5 s timeout. Returns a CheckResult discriminated union: on success, {status:"ok", records:[{exchange, priority},...]} sorted by priority; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
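
In the dns-json format an MX answer's data field is a "priority exchange" string, so the ranked view is one split and sort. A sketch reusing the hypothetical dohLookup helper from the dns_lookup example:

```typescript
// Hypothetical sketch: fetch MX answers and rank them ascending by priority.
async function mxRecords(domain: string) {
  const answers = await dohLookup(domain, "MX");
  const records = answers
    .map((a) => {
      const [priority, exchange] = (a.data as string).split(" ");
      return { exchange, priority: Number(priority) };
    })
    .sort((a, b) => a.priority - b.priority);
  return { status: "ok", records };
}
```
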
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only states the return format without disclosing behavioral traits like error handling, network dependencies, rate limits, or authentication needs. It mentions sorting by priority, which adds some context, but overall lacks critical operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and output format with zero wasted words, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate but incomplete. It covers the purpose and output format but lacks usage guidelines and behavioral transparency, leaving gaps for an AI agent to understand when and how to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'domain', so the schema already documents it fully as a 'Public FQDN.' The description does not add any meaning beyond this, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Return'), resource ('MX records for a domain'), and output format ('sorted by priority, as a CheckResult discriminated union'), distinguishing it from siblings like 'dns_lookup' or 'dossier_dns' by focusing specifically on MX records.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'dns_lookup' or 'dossier_dns', which might offer broader DNS queries. The description implies usage for MX records but lacks explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_redirects (Grade: A)

Core dossier check: Trace the full HTTP redirect chain starting from https://<domain>/, recording each hop's status code and destination URL. Use to debug redirect loops, verify HTTP→HTTPS upgrades, or audit link shorteners; stops at 10 hops to prevent infinite loops. Follows Location headers with fetch (no auto-redirect), 5 s per hop. Returns a CheckResult: on success, {status:"ok", hops:[{url, statusCode, redirectsTo},...], final}; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
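
"No auto-redirect" means fetching with redirect: "manual" and following Location yourself. A sketch with the 10-hop cap the description states; names and shapes are illustrative.

```typescript
// Hypothetical sketch of the hop-by-hop trace dossier_redirects describes.
async function traceRedirects(domain: string, maxHops = 10) {
  const hops: { url: string; statusCode: number; redirectsTo: string }[] = [];
  let url = `https://${domain}/`;
  for (let i = 0; i < maxHops; i++) {
    const res = await fetch(url, { redirect: "manual", signal: AbortSignal.timeout(5_000) });
    const location = res.headers.get("location");
    if (res.status < 300 || res.status >= 400 || !location) {
      return { status: "ok", hops, final: url }; // chain ended here
    }
    const next = new URL(location, url).toString(); // resolve relative Location values
    hops.push({ url, statusCode: res.status, redirectsTo: next });
    url = next;
  }
  return { status: "error", reason: `stopped after ${maxHops} hops` };
}
```
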
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the 10-hop limit and output format (CheckResult), which are useful behavioral traits. However, it doesn't mention rate limits, timeout behavior, error handling, or whether this makes external HTTP requests (though implied).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste - first sentence describes the action and constraints, second specifies the return format. Perfectly front-loaded with all essential information in minimal space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description provides good coverage: purpose, constraints (10 hops), and return format. It could be more complete by mentioning that this performs external HTTP requests or describing CheckResult structure, but given the tool's relative simplicity, it's mostly adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (domain parameter fully documented as 'Public FQDN'), so the baseline is 3. The description adds no additional parameter semantics beyond what's in the schema - it doesn't clarify domain format requirements or provide examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Trace the HTTP redirect chain'), resource ('starting at https://<domain>/'), and scope ('up to 10 hops'), with a distinct output format ('Returns the hop list as a CheckResult'). It differentiates from siblings like dossier_headers or dossier_tls by focusing specifically on redirect chain tracing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing HTTP redirects from a given domain, but provides no explicit guidance on when to use this tool versus alternatives like dossier_headers (which might show redirect headers) or general web analysis tools. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_spf (Grade: C)

Core dossier check: Retrieve and parse a domain's SPF record, decomposing it into mechanisms and qualifiers. Use to verify email sender policy, debug delivery failures, or check the 10-lookup limit; pair with dossier_dmarc for full email-auth coverage, or use dns_lookup with type=TXT for the raw record only. Fetches TXT records via Cloudflare DoH (1.1.1.1), 5 s timeout, locates the v=spf1 record and parses all mechanisms. Returns a CheckResult: on success, {status:"ok", raw, mechanisms:[{type, value, qualifier},...], lookupCount}; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
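
Mechanism decomposition is whitespace splitting plus qualifier handling. A simplified sketch of the parse the description outlines (RFC 7208 modifiers such as redirect= are glossed over):

```typescript
// Hypothetical sketch: "v=spf1 include:_spf.google.com ip4:192.0.2.0/24 -all"
// -> [{ type: "include", value: "_spf.google.com", qualifier: "+" }, ...]
function parseSpfMechanisms(raw: string) {
  return raw
    .trim()
    .split(/\s+/)
    .slice(1) // drop the "v=spf1" version tag
    .map((term) => {
      const hasQualifier = "+-~?".includes(term[0]);
      const qualifier = hasQualifier ? term[0] : "+"; // "+" is the implicit default
      const body = hasQualifier ? term.slice(1) : term;
      const colon = body.indexOf(":"); // indexOf, not split: ip6 values contain colons
      return {
        type: colon === -1 ? body : body.slice(0, colon),
        value: colon === -1 ? undefined : body.slice(colon + 1),
        qualifier,
      };
    });
}
```
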
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the output format ('CheckResult discriminated union'), which adds some context, but fails to describe critical behaviors such as error handling, rate limits, authentication needs, or whether it performs network calls. For a tool that likely queries external DNS servers, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality. It avoids unnecessary words and directly states the action, resource, and output format. However, it could be slightly more structured by separating usage context from technical details, which prevents a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving DNS queries and parsing), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the 'CheckResult discriminated union' output, error scenarios, or network behavior. For a tool that returns parsed data, more context is needed to guide the agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'domain' parameter documented as 'Public FQDN.' The description adds no additional parameter semantics beyond what the schema provides, such as format examples or validation rules. Given the high schema coverage, a baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the SPF record for a domain, parsed into mechanisms, as a CheckResult discriminated union.' It specifies the verb ('Return'), resource ('SPF record'), and transformation ('parsed into mechanisms'), distinguishing it from generic DNS tools. However, it doesn't explicitly differentiate from sibling tools like 'dossier_dns' or 'dossier_dmarc', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'dossier_dns' (which might return raw DNS records) or 'dossier_dmarc' (for email authentication), nor does it specify prerequisites or contexts for SPF analysis. This leaves the agent with minimal usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_tls (Grade: A)

Core dossier check: Fetch and inspect the TLS certificate presented by a domain on port 443, returning chain details and validity period. Use to verify certificate expiry, issuer, Subject Alternative Names, or detect mismatched or self-signed certs; not a full cipher-suite scanner. Performs a TLS handshake from the server edge, 5 s timeout; extracts the leaf certificate. Returns a CheckResult: on success, {status:"ok", subject, issuer, validFrom, validTo, daysRemaining, sans, fingerprint}; on failure, {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
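
The description says the handshake runs from the server edge, whose API is unspecified; in plain Node the equivalent inspection uses tls.connect and getPeerCertificate. A sketch under that assumption, not the server's actual runtime.

```typescript
import * as tls from "node:tls";

// Hypothetical Node sketch of the leaf-certificate inspection dossier_tls describes.
function inspectCertificate(domain: string) {
  return new Promise((resolve) => {
    const socket = tls.connect({ host: domain, port: 443, servername: domain, timeout: 5_000 }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      resolve({
        status: "ok",
        subject: cert.subject?.CN,
        issuer: cert.issuer?.CN,
        validFrom: cert.valid_from,
        validTo: cert.valid_to,
        sans: cert.subjectaltname, // e.g. "DNS:example.com, DNS:www.example.com"
        fingerprint: cert.fingerprint256,
      });
    });
    socket.on("error", (err) => resolve({ status: "error", reason: err.message }));
    socket.on("timeout", () => {
      socket.destroy();
      resolve({ status: "error", reason: "TLS handshake timed out" });
    });
  });
}
```
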
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the action (fetching TLS certificate) and output format (CheckResult with specific fields), but lacks details on error handling, network timeouts, or rate limits, which are important for a network tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, action, and output without any wasted words, making it easy to understand at a glance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (network operation with one parameter) and no annotations or output schema, the description is mostly complete, covering what it does and what it returns. However, it lacks details on potential errors or behavioral nuances, which would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'domain' parameter as a 'Public FQDN.' The description adds no additional parameter semantics beyond what the schema provides, such as examples or constraints, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetch', 'return') and resources ('TLS peer certificate for a domain on port 443'), distinguishing it from siblings like dossier_dns or dossier_headers by focusing on TLS certificate details rather than DNS records or HTTP headers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for TLS certificate analysis on port 443, providing clear context. However, it does not explicitly state when to use this tool versus alternatives like dossier_web_surface or when not to use it, such as for non-TLS domains or other ports.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dossier_web_surface (Grade: A)

Core dossier check: Snapshot a domain's public web surface: robots.txt, sitemap.xml, and the home-page metadata (title, description, OpenGraph, Twitter cards). Use for SEO audits, content discovery, or verifying metadata before sharing; for HTTP headers use dossier_headers, for redirect behavior use dossier_redirects. Fetches /, /robots.txt, and /sitemap.xml concurrently via HTTPS, 5 s each; parses with a lightweight HTML parser. Returns a composite CheckResult: {status:"ok", meta:{title, description, og, twitter}, robots, sitemapPresent} or {status:"error", reason}.

Parameters (JSON Schema):
- domain (required): Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected.
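
The three concurrent fetches can be sketched with Promise.allSettled so a missing robots.txt or sitemap doesn't fail the whole snapshot. Metadata parsing is omitted here because the description's "lightweight HTML parser" is unspecified.

```typescript
// Hypothetical sketch of the concurrent surface fetches dossier_web_surface describes.
async function fetchSurface(domain: string) {
  const get = (path: string) =>
    fetch(`https://${domain}${path}`, { signal: AbortSignal.timeout(5_000) });
  const [home, robots, sitemap] = await Promise.allSettled([
    get("/"),
    get("/robots.txt"),
    get("/sitemap.xml"),
  ]);
  const text = async (r: PromiseSettledResult<Response>) =>
    r.status === "fulfilled" && r.value.ok ? await r.value.text() : undefined;
  return {
    homeHtml: await text(home), // would feed the HTML/metadata parser
    robots: await text(robots),
    sitemapPresent: sitemap.status === "fulfilled" && sitemap.value.ok,
  };
}
```
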
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the tool's behavior by stating it summarizes specific web resources and returns a 'composite CheckResult', which hints at an aggregated output format. However, it lacks details on potential side effects (e.g., network requests, rate limits), error handling, or authentication needs, leaving gaps in behavioral context for a tool that likely performs external queries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, using a single sentence that efficiently conveys the tool's purpose, scope, and output. Every element—the action, target, specific components analyzed, and return type—earns its place without redundancy or unnecessary elaboration, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (involving multiple web resource checks) and lack of annotations or output schema, the description is partially complete. It specifies what the tool does and the return type ('composite CheckResult'), but does not detail the structure of the result, potential errors, or operational constraints like timeouts or data limits, which could be important for an agent using this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'domain' documented as 'Public FQDN.' The description adds minimal value beyond this, as it only mentions 'domain' generically without elaborating on format constraints or examples. Since the schema already fully describes the parameter, the baseline score of 3 is appropriate, reflecting adequate but not enhanced parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Summarise') and target resource ('domain's public web surface'), listing the exact components analyzed: robots.txt, sitemap.xml, and home-page metadata including title, description, OpenGraph, and Twitter tags. It distinguishes itself from sibling tools like dossier_dns or dossier_headers by focusing on web surface elements rather than DNS, headers, or other technical checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'domain's public web surface' and listing the components analyzed, which suggests it should be used for gathering public-facing web metadata. However, it does not explicitly state when to use this tool versus alternatives like dossier_headers or other dossier_* tools, nor does it provide exclusions or prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ip_lookup (Grade: A)

Context lookup: Resolve an IPv4 or IPv6 address to its geolocation, ASN, org name, and city/country. Use when you need network or location context for a raw IP address; prefer dns_lookup or dossier_dns for hostname resolution. Queries ipinfo.io with a server-side token — the token is never exposed to callers. Returns a JSON object with fields ip, city, region, country, org, loc, and timezone. On failure, returns an error string describing what went wrong.

Parameters (JSON Schema):
- ip (required): IPv4 or IPv6 address to look up, e.g. 1.2.3.4 or 2001:db8::1. Hostnames are not accepted.
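
The fields listed match ipinfo.io's JSON endpoint. A sketch of the upstream call; the token handling here is illustrative, since the real server keeps its token server-side.

```typescript
// Hypothetical sketch of the upstream ipinfo.io call ip_lookup describes.
async function ipInfo(ip: string, token: string) {
  const res = await fetch(`https://ipinfo.io/${ip}/json?token=${token}`, {
    headers: { accept: "application/json" },
  });
  if (!res.ok) return `ipinfo.io request failed with HTTP ${res.status}`;
  return res.json(); // { ip, city, region, country, org, loc, timezone, ... }
}
```
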
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the external service provider ('ipinfo.io'), which is useful context for rate limits or reliability. However, it doesn't mention authentication needs, rate limits, error handling, or whether this is a read-only operation, leaving behavioral gaps for a tool that makes external API calls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys purpose, input, output, and service provider without any wasted words. It's appropriately sized and front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description covers the basic purpose and return data types adequately. However, for a tool that interacts with an external API (ipinfo.io), it lacks details on error responses, rate limits, or output structure, which would help an agent use it more effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents the single parameter 'ip' as an IPv4 or IPv6 address. The description adds value by reinforcing the input types and linking it to the lookup purpose, but doesn't provide additional syntax or format details beyond what the schema states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Look up') and resource ('an IP address'), specifies the input types ('v4 or v6'), and lists the exact return data ('geolocation, ASN, and ISP'). It also distinguishes from siblings by mentioning the specific service provider ('via ipinfo.io'), which none of the other tools reference.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying the input types and return data, helping an agent understand when this tool is appropriate. However, it doesn't explicitly state when not to use it or name alternatives among siblings (e.g., dns_lookup for domain resolution), missing full comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

user_agent_parse (Grade: A)

Context lookup: Parse a User-Agent header string into structured browser, OS, device type, and rendering-engine components. Use to identify client capabilities from a raw UA string, e.g. when analysing server logs or request headers; does not perform any network lookups — entirely local parsing. Runs synchronously using the ua-parser-js library with no external calls. Returns a JSON object with browser.name, browser.version, os.name, os.version, device.type, device.vendor, and engine.name fields; unknown fields are empty strings.

Parameters (JSON Schema):
- ua (required): Full User-Agent header value as sent by the browser or HTTP client, e.g. "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36".
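
Local parsing with ua-parser-js looks roughly like this; a sketch assuming the library's v1 default export (import style varies by version), with empty strings for unknown fields as the description states.

```typescript
import UAParser from "ua-parser-js";

// Hypothetical sketch of the local parse user_agent_parse describes.
function parseUserAgent(ua: string) {
  const { browser, os, device, engine } = new UAParser(ua).getResult();
  return {
    browser: { name: browser.name ?? "", version: browser.version ?? "" },
    os: { name: os.name ?? "", version: os.version ?? "" },
    device: { type: device.type ?? "", vendor: device.vendor ?? "" },
    engine: { name: engine.name ?? "" },
  };
}
```
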
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions parsing into components but does not disclose behavioral traits such as error handling (e.g., for invalid input), performance characteristics, or output format details. This leaves gaps in understanding how the tool behaves beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and key details without any wasted words. It is appropriately sized for a simple tool with one parameter, making it easy to understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and output components but lacks details on behavioral aspects and error handling, which are important for a parsing tool. This results in a minimal viable description with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'ua' clearly documented as 'User-Agent header value'. The description adds no additional meaning beyond this, as it does not elaborate on parameter usage or constraints. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('parse') and the resource ('User-Agent string'), with explicit details about the output components ('browser, OS, device, and engine components'). It distinguishes this tool from siblings like base64_decode or url_encode by focusing on user-agent analysis rather than data transformation or encoding/decoding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a User-Agent string needs parsing, but it does not provide explicit guidance on when to use this tool versus alternatives or any exclusions. Given the sibling tools are unrelated (e.g., encoding/decoding, DNS lookup), the context is clear but lacks detailed comparative advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
