drwho.me
Server Details
Remote MCP server: 10 developer utilities — base64, JWT decode, DNS lookup, UUID, URL codec, JSON format, User-Agent, IP lookup.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 14 of 14 tools scored. Lowest: 2.9/5.
Most tools are clearly distinct with 'dossier_' prefix for domain checks and separate lookup/parse tools. Some overlap exists (e.g., dns_lookup vs dossier_dns), but descriptions sufficiently differentiate them.
Tool names follow a mostly consistent pattern: 'dossier_<check>' for domain checks, and 'verb_noun' for the lookups and parser. Minor inconsistency between the two conventions, but each group is internally consistent.
Fourteen tools is an appropriate count for a domain health and network utility server. The count covers essential checks without being overwhelming, and each tool serves a specific purpose.
The tool surface covers DNS, email authentication, TLS, headers, redirects, CORS, and web surface, which is comprehensive for domain analysis. Minor gaps, such as the absence of a dedicated HTTP status-code check, are compensated for by related tools.
Available Tools
14 tools
dns_lookup (B)
Context lookup: Resolve a single DNS record type (A, AAAA, MX, TXT, NS, CNAME, SOA, CAA, or SRV) and return the raw answers. Use for quick, targeted lookups of one record type; prefer dossier_dns for a full multi-type DNS audit in parallel, or dossier_full for a complete domain health check. Queries Cloudflare DoH (1.1.1.1/dns-query) over HTTPS, follows CNAME chains, 5 s timeout. Returns a JSON array of answer objects with name, type, and data fields. On error, returns a string describing the DNS failure.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Domain name or hostname to resolve, e.g. example.com or mail.example.com. FQDN preferred; relative labels are accepted. | |
| type | Yes | DNS record type to query. Common choices: A (IPv4), AAAA (IPv6), MX (mail), TXT (SPF/DKIM/verification), NS (nameservers), CNAME (alias). | |
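A hypothetical request and result sketched from the description above; the arguments/result wrapper and the answer values are illustrative, not captured server output.

```json
{
  "arguments": { "name": "example.com", "type": "MX" },
  "result": [
    { "name": "example.com", "type": "MX", "data": "10 mail.example.com." }
  ]
}
```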
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions the resolution method (Cloudflare DoH), it doesn't describe rate limits, error conditions, authentication requirements, response format, or whether this is a read-only operation. For a network tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that packs essential information: the action, resource, record types, and method. Every word earns its place with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a DNS lookup tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., IP addresses, mail server priorities, text records), error handling, or operational constraints. The agent would need to guess about the response format and potential limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the schema (domain name and record type). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Resolve a DNS record') and resource (DNS records via Cloudflare DoH), listing the exact record types supported. It distinguishes itself from sibling tools like ip_lookup by focusing on DNS resolution rather than IP geolocation or other utility functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying DNS record types and Cloudflare DoH, but doesn't explicitly state when to use this tool versus alternatives. No guidance is provided on prerequisites, limitations, or comparisons with other DNS tools that might exist elsewhere.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_cors (C)
Core dossier check: Send a CORS preflight OPTIONS request to https://<domain>/ and return the access-control-* response headers. Use to verify CORS policy for a specific origin-method pair, or to check whether a domain allows cross-origin requests; provide origin and method to simulate a precise preflight, or omit to use defaults (origin: https://drwho.me, method: GET). Single OPTIONS request via fetch, 5 s timeout. Returns a CheckResult: on success, {status:"ok", headers:{access-control-allow-origin,...}}; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. | |
| method | No | Access-Control-Request-Method header value, e.g. POST or PUT. | GET |
| origin | No | Origin header value to include in the preflight, e.g. https://app.example.com. | https://drwho.me |
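For orientation, one possible preflight call and success result, following the shapes stated above; the header values are invented for illustration.

```json
{
  "arguments": { "domain": "example.com", "origin": "https://app.example.com", "method": "POST" },
  "result": {
    "status": "ok",
    "headers": {
      "access-control-allow-origin": "https://app.example.com",
      "access-control-allow-methods": "GET, POST, OPTIONS"
    }
  }
}
```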
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the action (sending a CORS preflight OPTIONS request) and output (returning access-control-* headers), but lacks details on error handling, network timeouts, rate limits, authentication needs, or what happens if the domain is unreachable. For a network tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—just two sentences that directly state the tool's function and note optional inputs. Every word earns its place with no redundancy or fluff. It's front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (network request to external domains), lack of annotations, and no output schema, the description is incomplete. It doesn't cover error cases, response formats beyond headers, security implications, or how it differs from sibling tools. For a tool that interacts with external systems, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters (domain, method, origin) with descriptions and defaults. The description adds minimal value by noting that origin and method are optional, but doesn't provide additional context beyond what's in the schema. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Send a CORS preflight OPTIONS to https://<domain>/ and return the access-control-* headers.' It specifies the verb (send), resource (CORS preflight OPTIONS request), and output (access-control-* headers). However, it doesn't explicitly differentiate from sibling tools like dossier_headers or dossier_web_surface, which might also involve HTTP headers or web analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions optional inputs but doesn't explain scenarios where this tool is preferred over other dossier_* tools or general HTTP tools. There's no mention of prerequisites, typical use cases, or comparisons with siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_dkim (A)
Core dossier check: Probe a domain's DKIM public keys by querying <selector>._domainkey.<domain> for each selector. Use to verify signing configuration or discover active selectors; supply selectors when you know the ESP's selector, or omit to probe six common selectors (default, google, k1, selector1, selector2, mxvault). Issues parallel Cloudflare DoH (1.1.1.1) TXT queries per selector, 5 s timeout each. Returns a CheckResult: {status:"ok", found:[{selector, publicKey, raw},...], notFound:[...]} or {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. | |
| selectors | No | DKIM selector names to probe, e.g. ["google", "s1"]. | default, google, k1, selector1, selector2, mxvault |
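A sketch of one call and its result, assuming the shape given in the description; the selector, public key, and raw record are placeholder values.

```json
{
  "arguments": { "domain": "example.com", "selectors": ["google", "s1"] },
  "result": {
    "status": "ok",
    "found": [
      { "selector": "google", "publicKey": "MIGfMA0GCSqGSIb3DQEBAQUAA4GN...", "raw": "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN..." }
    ],
    "notFound": ["s1"]
  }
}
```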
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool probes DKIM selectors and returns a CheckResult, but lacks details on behavioral traits such as rate limits, error handling, or what 'probe' entails (e.g., network calls, timeouts). It adds some context but leaves gaps in operational transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficiently structured in two sentences: the first states the purpose and defaults, the second explains custom usage and return value. Every sentence adds essential information without waste, making it highly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is moderately complete. It covers the tool's purpose, parameter usage, and return type (CheckResult), but lacks details on output structure, error cases, or deeper behavioral context. For a probing tool with network implications, more completeness would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds marginal value by clarifying the default behavior for selectors ('Defaults to common selectors...') and linking it to the optional parameter, but does not provide additional semantics beyond what the schema specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Probe') and resource ('DKIM selectors for a domain'), specifying what the tool does. It distinguishes from siblings like dossier_dmarc or dossier_spf by focusing specifically on DKIM selector probing, making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use custom selectors vs. defaults ('Supply `selectors` to probe a custom list'), but does not explicitly mention when to use this tool over alternatives like dossier_dns or other dossier tools. It implies usage for DKIM-related checks without exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_dmarc (C)
Core dossier check: Retrieve and parse a domain's DMARC policy from its _dmarc.<domain> TXT record, returning all tags. Use to audit email authentication policy, verify the p (policy) and rua (reporting) settings, or confirm alignment mode; pair with dossier_spf and dossier_dkim for complete email-auth coverage. Queries _dmarc.<domain> via Cloudflare DoH (1.1.1.1), 5 s timeout; parses each tag=value pair. Returns a CheckResult: on success, {status:"ok", raw, tags:{p, rua, ruf, adkim, aspf,...}}; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
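One illustrative call and success result, following the tag structure named in the description; the policy values are made up.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "raw": "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com",
    "tags": { "p": "quarantine", "rua": "mailto:dmarc@example.com" }
  }
}
```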
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the output format ('CheckResult discriminated union') but doesn't explain what this means, potential errors (e.g., if the domain lacks a DMARC record), network behavior, or performance characteristics. For a tool that performs DNS lookups, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently conveys the core functionality. It front-loads the purpose but could be more structured for clarity, as it combines multiple concepts (returning, parsing, output type) without separation. However, it avoids redundancy and wastes no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (DNS-based lookup with parsing), lack of annotations, and no output schema, the description is partially complete. It covers the basic operation but omits details on error handling, return values, and practical usage context. It's adequate for a simple tool but leaves the agent to guess about behavioral aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'domain' documented as 'Public FQDN.' The description adds no additional parameter semantics beyond what the schema provides, such as format examples or validation rules. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return the DMARC record for a domain from _dmarc.<domain>, parsed into tags, as a CheckResult discriminated union.' It specifies the verb ('Return'), resource ('DMARC record'), and transformation ('parsed into tags'), but doesn't explicitly differentiate from sibling tools like dossier_dkim or dossier_spf, which likely perform similar DNS-based checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like dossier_dns (which might handle general DNS queries) or specify use cases like security auditing versus general DNS lookup. The agent must infer usage from the tool name and context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_dns (A)
Core dossier check: Fetch a domain's full DNS profile — A, AAAA, NS, SOA, CAA, and TXT records — all in parallel. Use as the first step of a domain audit or when you need a comprehensive DNS snapshot in one call; prefer dns_lookup for a single record type, or dossier_full for all 10 dossier checks at once. Fires six Cloudflare DoH (1.1.1.1) queries concurrently, each with a 5 s timeout. Returns a CheckResult discriminated union: on success, {status:"ok", records:{a, aaaa, ns, soa, caa, txt}}; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
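A hypothetical result sketched from the record groups listed in the description; the per-type value format (arrays of strings) and the addresses are assumptions made for illustration.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "records": {
      "a": ["203.0.113.10"],
      "aaaa": [],
      "ns": ["ns1.example.net.", "ns2.example.net."],
      "soa": ["ns1.example.net. hostmaster.example.com. 2024010101 7200 3600 1209600 3600"],
      "caa": ["0 issue \"ca.example.net\""],
      "txt": ["v=spf1 -all"]
    }
  }
}
```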
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden and discloses key behavioral traits: parallel resolution of multiple DNS record types and return of a CheckResult discriminated union. However, it lacks details on error handling, rate limits, authentication needs, or what the CheckResult structure entails, leaving gaps in behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and highly concise, consisting of a single sentence that efficiently conveys the tool's core functionality without unnecessary words. Every element (action, scope, output) earns its place, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (DNS resolution with multiple record types), no annotations, and no output schema, the description is partially complete. It covers the main action and output type but lacks details on the CheckResult structure, error cases, or performance implications, which could hinder agent usage in edge scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single 'domain' parameter. The description adds no additional parameter semantics beyond what the schema provides, such as examples of valid domains or clarification on 'public FQDN'. Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Run', 'resolves') and resources ('DNS section of the domain dossier', 'A, AAAA, NS, SOA, CAA, TXT records'). It distinguishes from siblings like 'dns_lookup' by specifying parallel resolution of multiple record types and returning a CheckResult discriminated union.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'dns_lookup' is provided. The description implies usage for comprehensive DNS analysis but doesn't specify scenarios, prerequisites, or exclusions, leaving the agent to infer context without clear direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_full (A)
Aggregate dossier check: Run all 10 Domain Dossier checks — dns, mx, spf, dmarc, dkim, tls, redirects, headers, cors, web-surface — in parallel and return all results in a single response. Use when you need a comprehensive domain health snapshot in one call; counts as ONE paywall call regardless of how many checks run. For a single focused check, prefer the individual dossier_* tools to minimise latency. Fires all 10 checks concurrently via Cloudflare DoH or direct HTTPS, 5 s per-check timeout. Returns a JSON object keyed by check id (dns, mx, etc.), each value a CheckResult discriminated union ({status:"ok",...} or {status:"error", reason}).
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
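An abridged sketch of the aggregate result, assuming each check follows its own CheckResult shape; only four of the ten keys are shown and all values are invented.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "dns": { "status": "ok", "records": { "a": ["203.0.113.10"], "txt": ["v=spf1 -all"] } },
    "mx": { "status": "ok", "records": [{ "exchange": "mail.example.com.", "priority": 10 }] },
    "dmarc": { "status": "error", "reason": "no _dmarc TXT record found" },
    "tls": { "status": "ok", "daysRemaining": 60 }
  }
}
```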
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that it runs all 10 checks in parallel, returns one JSON object keyed by check id, and emphasizes that it counts as one MCP call. With no annotations provided, this covers the safety profile (non-destructive reads) and performance characteristics well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each providing distinct value: what it does, the return format, and a key behavioral note. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description explicitly specifies the return format ('JSON object keyed by check id, each value a CheckResult'), making it clear. It fully covers purpose, behavior, and efficiency for a single-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already has 100% description coverage for the single 'domain' parameter with a clear description. The description adds no extra parameter info, but baseline is 3 due to high schema coverage. The context of 'Public FQDN' is consistent and sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'run', the resource 'all 10 dossier checks', and the target 'a domain'. It clearly lists all 10 check types, distinguishing itself from sibling tools like dossier_dns or dossier_cors that run single checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states that the tool runs checks in parallel and counts as one MCP call, implying efficiency for batch checks. However, it does not explicitly tell the agent when to use this versus the individual dossier tools, though the context implies it is intended for comprehensive checks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_headers (C)
Core dossier check: Fetch https://<domain>/ and return all HTTP response headers, with an audit highlighting missing or misconfigured security headers. Use to review CSP, HSTS, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, and Permissions-Policy; for redirect tracing use dossier_redirects instead. Single GET via fetch, 5 s timeout, captures raw response headers before any redirect is followed. Returns a CheckResult: on success, {status:"ok", headers:{...}, securityAudit:[{header, present, value},...]}; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
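A possible success result, abridged; the header set, the null value for an absent header, and the audit entries are illustrative assumptions.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "headers": {
      "content-type": "text/html; charset=utf-8",
      "strict-transport-security": "max-age=31536000"
    },
    "securityAudit": [
      { "header": "strict-transport-security", "present": true, "value": "max-age=31536000" },
      { "header": "content-security-policy", "present": false, "value": null }
    ]
  }
}
```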
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions fetching and returning headers as a 'CheckResult', but doesn't specify details like error handling, rate limits, authentication needs, or what 'CheckResult' entails (e.g., format or content). For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's action and output. It's front-loaded with the core purpose and avoids unnecessary details, making it easy to parse. However, it could be slightly improved by structuring key points more explicitly, but overall, it's concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a web fetch operation), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the 'CheckResult' output format, potential errors, or behavioral traits like timeouts or security considerations. For a tool that interacts with external domains, more context is needed to ensure proper usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'domain' clearly documented as a 'Public FQDN.' The description adds minimal value beyond the schema by implying the domain is used in a URL fetch, but it doesn't provide additional context like examples or constraints. Given the high schema coverage, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Fetch') and resource ('response headers'), and it specifies the target ('https://<domain>/'). However, it doesn't explicitly differentiate from sibling tools like 'dossier_dns' or 'dossier_tls', which might also involve domain-related operations, leaving some ambiguity about its unique role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or comparisons with sibling tools such as 'dossier_cors' or 'dossier_web_surface', which could be relevant for similar web-related checks. This lack of context makes it harder for an agent to select the right tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_mx (B)
Core dossier check: Look up a domain's MX (mail exchanger) records and return them sorted ascending by priority. Use when verifying inbound-mail routing or as a precursor to SPF or DMARC checks; prefer dns_lookup with type=MX if you only need the raw DNS answer without the ranked view. Queries Cloudflare DoH (1.1.1.1), follows CNAME aliases, 5 s timeout. Returns a CheckResult discriminated union: on success, {status:"ok", records:[{exchange, priority},...]} sorted by priority; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
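A sketch of the priority-sorted result described above; the exchanges and priorities are placeholders.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "records": [
      { "exchange": "mx1.example.com.", "priority": 10 },
      { "exchange": "mx2.example.com.", "priority": 20 }
    ]
  }
}
```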
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. It mentions the output format ('CheckResult discriminated union') and sorting ('sorted by priority'), adding useful context beyond basic functionality. However, it omits details like error handling, rate limits, or authentication needs, which are important for a network tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. It could be slightly more structured by separating behavioral details, but it avoids waste and is appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is moderately complete: it covers the purpose and output format. However, for a network query tool, it lacks details on error cases, performance, or integration context, leaving gaps that could hinder agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'domain'. The description adds no additional meaning about the parameter, such as format examples or constraints beyond 'Public FQDN'. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and resource ('MX records for a domain'), distinguishing it from sibling tools like 'dns_lookup' or 'dossier_dns' by specifying MX records. However, it doesn't explicitly contrast with these siblings, missing full differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'dns_lookup' or 'dossier_dns'. The description implies usage for MX records but lacks explicit context, prerequisites, or exclusions, leaving the agent to infer from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_redirects (A)
Core dossier check: Trace the full HTTP redirect chain starting from https://<domain>/, recording each hop's status code and destination URL. Use to debug redirect loops, verify HTTP→HTTPS upgrades, or audit link shorteners; stops at 10 hops to prevent infinite loops. Follows Location headers with fetch (no auto-redirect), 5 s per hop. Returns a CheckResult: on success, {status:"ok", hops:[{url, statusCode, redirectsTo},...], final}; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
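One possible two-hop trace, following the hop shape in the description; the null redirectsTo on the final hop is an assumption.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "hops": [
      { "url": "https://example.com/", "statusCode": 301, "redirectsTo": "https://www.example.com/" },
      { "url": "https://www.example.com/", "statusCode": 200, "redirectsTo": null }
    ],
    "final": "https://www.example.com/"
  }
}
```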
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the basic operation and output format but lacks important behavioral details: whether this makes external HTTP requests, what happens with invalid domains, timeout behavior, authentication requirements, rate limits, or what 'CheckResult' contains structurally. For a tool that likely makes network calls, this is insufficient transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with all essential information in a single sentence. Every element earns its place: the action, target, constraints, and output format. There's zero wasted text or redundancy, making it highly efficient for agent comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (network operation with 1 parameter) and no annotations or output schema, the description provides basic completeness but has significant gaps. It covers purpose and parameters adequately but lacks behavioral details about network calls, error handling, and output structure. For a tool that interacts with external systems, more context about reliability and response format would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the parameter 'domain' documented as 'Public FQDN.' The description adds context by showing how the domain is used ('starting at https://<domain>/') and that it's the starting point for redirect tracing. However, it doesn't provide additional semantic details beyond what the schema already covers, maintaining the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Trace the HTTP redirect chain') and resource ('starting at https://<domain>/'), with explicit scope limitations ('up to 10 hops') and output format ('Returns the hop list as a CheckResult'). It distinguishes from siblings like dossier_headers or dossier_web_surface by focusing specifically on redirect chain analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'starting at https://<domain>/' and 'up to 10 hops', suggesting this is for analyzing web redirects with a hop limit. However, it doesn't explicitly state when to use this tool versus alternatives like dossier_headers (which might show redirect headers) or when not to use it (e.g., for non-HTTP protocols). No explicit alternatives or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_spf (B)
Core dossier check: Retrieve and parse a domain's SPF record, decomposing it into mechanisms and qualifiers. Use to verify email sender policy, debug delivery failures, or check the 10-lookup limit; pair with dossier_dmarc for full email-auth coverage, or use dns_lookup with type=TXT for the raw record only. Fetches TXT records via Cloudflare DoH (1.1.1.1), 5 s timeout, locates the v=spf1 record and parses all mechanisms. Returns a CheckResult: on success, {status:"ok", raw, mechanisms:[{type, value, qualifier},...], lookupCount}; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
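An illustrative SPF parse matching the mechanism structure stated above; the record itself is a common pattern, not a real lookup.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "raw": "v=spf1 include:_spf.example.net -all",
    "mechanisms": [
      { "type": "include", "value": "_spf.example.net", "qualifier": "+" },
      { "type": "all", "value": "", "qualifier": "-" }
    ],
    "lookupCount": 1
  }
}
```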
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions parsing into mechanisms and returning a CheckResult discriminated union, which adds some behavioral context (e.g., structured output). However, it lacks details on error handling, rate limits, authentication needs, or what 'parsed' entails, leaving gaps for a tool that likely involves external DNS queries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part earns its place by specifying the action, resource, parsing detail, and output type, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter with full schema coverage and no output schema, the description provides the purpose and output type but lacks context on when to use it, behavioral traits, or error handling. For a simple lookup tool, this is minimally adequate but has clear gaps in usage and transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'domain' documented as 'Public FQDN.' The description doesn't add meaning beyond this, such as examples or constraints, so it meets the baseline of 3 where the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and resource ('SPF record for a domain'), specifying it's parsed into mechanisms and returned as a CheckResult discriminated union. However, it doesn't explicitly differentiate from sibling tools like dossier_dns or dossier_dmarc, which might also involve DNS record analysis, though the focus on SPF is specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention if this is for security analysis, email configuration checks, or how it differs from general DNS lookup tools in the sibling list, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_tls (A)
Core dossier check: Fetch and inspect the TLS certificate presented by a domain on port 443, returning chain details and validity period. Use to verify certificate expiry, issuer, Subject Alternative Names, or detect mismatched or self-signed certs; not a full cipher-suite scanner. Performs a TLS handshake from the server edge, 5 s timeout; extracts the leaf certificate. Returns a CheckResult: on success, {status:"ok", subject, issuer, validFrom, validTo, daysRemaining, sans, fingerprint}; on failure, {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
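A hypothetical certificate summary with the fields named in the description; the issuer, dates, and truncated fingerprint are placeholders.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "subject": "CN=example.com",
    "issuer": "C=US, O=Example CA, CN=Example Issuing CA",
    "validFrom": "2024-05-01T00:00:00Z",
    "validTo": "2024-07-30T23:59:59Z",
    "daysRemaining": 42,
    "sans": ["example.com", "www.example.com"],
    "fingerprint": "3A:1B:..."
  }
}
```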
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the operation (fetching TLS certificates) and output format (CheckResult with specific fields), but doesn't mention potential errors (e.g., if domain doesn't support TLS on port 443), performance characteristics, or network dependencies. The description adds basic context but lacks depth about failure modes or operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates the tool's purpose, scope (port 443), and output format. Every element serves a purpose: the verb 'Fetch,' the target 'TLS peer certificate,' the constraint 'on port 443,' and the detailed output specification. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (network operation with specific output format), no annotations, and no output schema, the description provides adequate but minimal coverage. It specifies what fields are returned but doesn't describe the CheckResult structure, error handling, or network timeout behavior. For a security-focused tool making external connections, more operational context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the single 'domain' parameter well-documented as 'Public FQDN.' The description adds value by specifying this is for 'port 443' TLS checking, providing important context about the parameter's intended use that goes beyond the schema's technical definition. For a single-parameter tool with good schema coverage, this earns above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch the TLS peer certificate for a domain on port 443') and the resources involved ('subject, issuer, validity, SANs, and fingerprint as a CheckResult'). It distinguishes from siblings like 'dossier_dns' or 'dossier_headers' by focusing specifically on TLS certificate retrieval rather than DNS records or HTTP headers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'domain on port 443', suggesting this is for HTTPS TLS certificate checking. However, it doesn't explicitly state when to use this tool versus alternatives like 'dossier_dns' for DNS records or 'dossier_headers' for HTTP headers, nor does it mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dossier_web_surface (A)
Core dossier check: Snapshot a domain's public web surface: robots.txt, sitemap.xml, and the home-page metadata (title, description, OpenGraph, Twitter cards). Use for SEO audits, content discovery, or verifying metadata before sharing; for HTTP headers use dossier_headers, for redirect behavior use dossier_redirects. Fetches /, /robots.txt, and /sitemap.xml concurrently via HTTPS, 5 s each; parses with a lightweight HTML parser. Returns a composite CheckResult: {status:"ok", meta:{title, description, og, twitter}, robots, sitemapPresent} or {status:"error", reason}.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Public FQDN, e.g. example.com. Must be resolvable on the public internet; IPs, ports, paths, and protocol prefixes are rejected. |
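A sketch of the composite result described above; whether robots is returned as raw text is an assumption, and all metadata values are invented.

```json
{
  "arguments": { "domain": "example.com" },
  "result": {
    "status": "ok",
    "meta": {
      "title": "Example Domain",
      "description": "An illustrative description.",
      "og": { "og:title": "Example Domain" },
      "twitter": {}
    },
    "robots": "User-agent: *\nAllow: /",
    "sitemapPresent": false
  }
}
```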
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return type ('composite CheckResult'), which adds some context, but lacks details on potential errors, rate limits, authentication needs, or what happens if resources are missing. The description doesn't contradict annotations (none exist), but could provide more behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and output. It front-loads the core action ('Summarise') and lists specific resources without unnecessary elaboration, making every word count.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (analyzing multiple web resources), no annotations, and no output schema, the description does a good job by specifying what it summarizes and the return type. However, it could be more complete by detailing the structure of 'CheckResult' or handling of edge cases (e.g., missing files), leaving some gaps for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'domain' documented as 'Public FQDN.' The description doesn't add any parameter-specific information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without compensating with extra details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Summarise') and the exact resources involved (robots.txt, sitemap.xml, home-page <head> metadata including title, description, OpenGraph, Twitter). It distinguishes itself from sibling tools like dossier_dns or dossier_headers by focusing on web surface analysis rather than DNS or HTTP headers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'public web surface' and listing specific files/metadata to analyze, which suggests it's for domain reconnaissance. However, it doesn't explicitly state when to use this tool versus alternatives like dossier_headers or when not to use it (e.g., for non-public domains).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ip_lookup (A)
Context lookup: Resolve an IPv4 or IPv6 address to its geolocation, ASN, org name, and city/country. Use when you need network or location context for a raw IP address; prefer dns_lookup or dossier_dns for hostname resolution. Queries ipinfo.io with a server-side token — the token is never exposed to callers. Returns a JSON object with fields ip, city, region, country, org, loc, and timezone. On failure, returns an error string describing what went wrong.
| Name | Required | Description | Default |
|---|---|---|---|
| ip | Yes | IPv4 or IPv6 address to look up, e.g. 1.2.3.4 or 2001:db8::1. Hostnames are not accepted. |
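An illustrative lookup using a documentation address; the field values are invented rather than real ipinfo.io output.

```json
{
  "arguments": { "ip": "198.51.100.7" },
  "result": {
    "ip": "198.51.100.7",
    "city": "Amsterdam",
    "region": "North Holland",
    "country": "NL",
    "org": "AS64500 Example Networks",
    "loc": "52.3740,4.8897",
    "timezone": "Europe/Amsterdam"
  }
}
```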
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the external service used (ipinfo.io) and the types of data returned, but lacks details on rate limits, authentication needs, error handling, or response format. It adequately describes the core behavior but misses operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys purpose, scope, and data source without redundancy. Every element (action, input, output, service) earns its place, making it front-loaded and zero-waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with one parameter (100% schema coverage) and no output schema, the description is reasonably complete—it covers what the tool does and the data source. However, without annotations or output schema, it could better address behavioral aspects like rate limits or response structure to fully compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'ip' fully documented in the schema. The description adds no additional parameter details beyond implying IPv4/v6 support (already in schema). Baseline 3 is appropriate as the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('look up'), resource ('IP address'), and what information is returned ('geolocation, ASN, and ISP'), with explicit mention of the data source ('via ipinfo.io'). It distinguishes itself from siblings like dns_lookup by focusing on IP metadata rather than DNS resolution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for IP address analysis, but provides no explicit guidance on when to use this tool versus alternatives (e.g., dns_lookup for domain resolution) or any prerequisites. The context is clear but lacks comparative or exclusionary statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
user_agent_parse (A)
Context lookup: Parse a User-Agent header string into structured browser, OS, device type, and rendering-engine components. Use to identify client capabilities from a raw UA string, e.g. when analysing server logs or request headers; does not perform any network lookups — entirely local parsing. Runs synchronously using the ua-parser-js library with no external calls. Returns a JSON object with browser.name, browser.version, os.name, os.version, device.type, device.vendor, and engine.name fields; unknown fields are empty strings.
| Name | Required | Description | Default |
|---|---|---|---|
| ua | Yes | Full User-Agent header value as sent by the browser or HTTP client, e.g. "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36". |
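One possible parse of a desktop Chrome UA string, following the field list in the description; exact names and versions depend on the ua-parser-js release, so treat the values as illustrative.

```json
{
  "arguments": { "ua": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36" },
  "result": {
    "browser": { "name": "Chrome", "version": "124.0.0.0" },
    "os": { "name": "Mac OS", "version": "10.15.7" },
    "device": { "type": "", "vendor": "" },
    "engine": { "name": "Blink" }
  }
}
```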
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool's function (parsing into components) but lacks details on behavioral traits like error handling, performance, or output format. The description doesn't contradict annotations, but offers minimal behavioral context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose and output components. It is front-loaded with no wasted words, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is adequate but incomplete. It explains what the tool does but lacks details on output structure, error cases, or usage context, which could help an agent use it correctly without trial and error.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'ua' documented as 'User-Agent header value'. The description adds no additional parameter semantics beyond what the schema provides, such as examples or constraints. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'parse' and the resource 'User-Agent string', specifying the output components (browser, OS, device, engine). It distinguishes from siblings like base64_encode or uuid_generate by focusing on parsing rather than encoding/generation, though it doesn't explicitly differentiate from all siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a User-Agent string needs parsing into components, but provides no explicit guidance on when to use this tool versus alternatives (e.g., if other tools handle similar parsing). It lacks context on prerequisites or exclusions, such as input format requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.