drwho.me
Server Details
Remote MCP server: 10 developer utilities — base64, JWT decode, DNS lookup, UUID, URL codec, JSON format, User-Agent, IP lookup.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored.
Each tool has a clearly distinct purpose with no overlap: base64 encode/decode, DNS/IP lookups, JSON formatting, JWT decoding, URL encode/decode, user-agent parsing, and UUID generation. The descriptions make it easy to differentiate between them, and there are no ambiguous boundaries.
All tool names follow a consistent snake_case pattern with clear verb_noun or noun_verb structures (e.g., base64_decode, dns_lookup, json_format). There are no deviations in naming conventions, making the set predictable and easy to understand.
With 10 tools, the count is well-scoped for a utility server focused on common data transformation and lookup tasks. Each tool serves a specific, useful function, and there are no redundant or trivial additions, fitting the server's purpose effectively.
The tool set covers a broad range of utility operations with no major gaps for its domain, such as encoding/decoding, lookups, and parsing. Minor gaps might include additional data formats (e.g., XML or YAML handling) or more advanced JWT operations, but core workflows are well-covered.
Available Tools
10 tools

base64_decode (Grade: A)
Decode a base64 (or base64url) string to UTF-8.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Base64 or base64url string | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the core behavior (decoding to UTF-8) but doesn't mention error handling for invalid inputs, performance characteristics, or any side effects. It provides basic behavioral information but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence with zero waste: every word contributes essential information. It is perfectly front-loaded with the core action and scope, with no unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple transformation tool with 100% schema coverage but no annotations or output schema, the description provides adequate functional context but lacks information about return format, error cases, or encoding specifics. It's minimally complete for basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter. The description adds marginal value by reinforcing that the input accepts both base64 and base64url formats, but doesn't provide additional syntax or format details beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Decode') and resource ('a base64 (or base64url) string to UTF-8'), distinguishing it from siblings like base64_encode and url_decode. It precisely defines the transformation being performed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying the input type (base64/base64url strings) and output format (UTF-8), but doesn't explicitly state when to use this tool versus alternatives like url_decode or when not to use it. It provides clear functional context without explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
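For reference, the decoding behavior described here (accepting both the standard and URL-safe alphabets, with or without padding) can be sketched with Python's standard library. This is a local illustration of the documented behavior, not the server's implementation:

```python
import base64

def base64_decode(data: str) -> str:
    # Map the standard "+/" alphabet onto the URL-safe "-_" so
    # urlsafe_b64decode handles either form, then restore any
    # stripped "=" padding before decoding to UTF-8.
    normalized = data.translate(str.maketrans("+/", "-_"))
    padded = normalized + "=" * (-len(normalized) % 4)
    return base64.urlsafe_b64decode(padded).decode("utf-8")
```

Note that inputs which are valid base64 but not valid UTF-8 would raise here; the tool's error behavior for that case is not documented.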
base64_encode (Grade: A)
Encode a UTF-8 string as standard base64.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | UTF-8 string to encode | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It states the transformation behavior (UTF-8 to base64) but doesn't disclose error handling for non-UTF-8 input, performance characteristics, or output format details. The description adds basic behavioral context but is not exhaustive for this transformation operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core purpose and contains no redundant information, making it optimally concise for this simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple transformation tool with one parameter and no output schema, the description provides adequate context about what the tool does. However, it lacks information about return values, error conditions, or encoding standards, which would be helpful given the absence of annotations and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single 'input' parameter. The description adds no additional parameter information beyond what's in the schema, maintaining the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Encode') and resource ('UTF-8 string') with the transformation target ('as standard base64'). It distinguishes from sibling tools like base64_decode and url_encode by specifying the exact encoding operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'UTF-8 string' as input, which suggests when to use this tool (for UTF-8 text encoding). However, it doesn't explicitly state when not to use it or name alternatives like url_encode for different encoding needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
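The encoding side is a one-liner over the standard library; a minimal local sketch of the documented behavior (standard alphabet, "=" padding):

```python
import base64

def base64_encode(text: str) -> str:
    # Standard alphabet with "=" padding, as the tool description states.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")
```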
dns_lookup (Grade: B)
Resolve a DNS record (A, AAAA, MX, TXT, NS, CNAME) via Cloudflare DoH.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Domain name to resolve | |
| type | Yes | DNS record type | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions the resolution method (Cloudflare DoH), it doesn't describe rate limits, error conditions, authentication requirements, response format, or whether this is a read-only operation. For a network tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that packs essential information: the action, resource, record types, and method. Every word earns its place with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a DNS lookup tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., IP addresses, mail server priorities, text records), error handling, or operational constraints. The agent would need to guess about the response format and potential limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema. The description doesn't add any parameter-specific information beyond what's already in the schema (domain name and record type). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Resolve a DNS record') and resource (DNS records via Cloudflare DoH), listing the exact record types supported. It distinguishes itself from sibling tools like ip_lookup by focusing on DNS resolution rather than IP geolocation or other utility functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying DNS record types and Cloudflare DoH, but doesn't explicitly state when to use this tool versus alternatives. No guidance is provided on prerequisites, limitations, or comparisons with other DNS tools that might exist elsewhere.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
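To make the "via Cloudflare DoH" part concrete: Cloudflare exposes a JSON DNS-over-HTTPS API at `cloudflare-dns.com/dns-query` that takes `name` and `type` query parameters and requires an `Accept: application/dns-json` header. The sketch below shows how such a lookup might be built and its response parsed; it is an assumption about the approach, not the server's actual code:

```python
from urllib.parse import urlencode
from urllib.request import Request

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def build_doh_request(name: str, rtype: str) -> Request:
    # Cloudflare's JSON API requires Accept: application/dns-json;
    # pass the built Request to urllib.request.urlopen to execute it.
    query = urlencode({"name": name, "type": rtype})
    return Request(f"{DOH_ENDPOINT}?{query}",
                   headers={"Accept": "application/dns-json"})

def extract_answers(body: dict) -> list:
    # The "Answer" key is absent when the name does not resolve.
    return [record["data"] for record in body.get("Answer", [])]
```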
ip_lookup (Grade: A)
Look up an IP address (v4 or v6) and return its geolocation, ASN, and ISP via ipinfo.io.
| Name | Required | Description | Default |
|---|---|---|---|
| ip | Yes | IPv4 or IPv6 address to look up | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the external service used (ipinfo.io) and the types of data returned, but lacks details on rate limits, authentication needs, error handling, or response format. It adequately describes the core behavior but misses operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys purpose, scope, and data source without redundancy. Every element (action, input, output, service) earns its place, making it front-loaded and zero-waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with one parameter (100% schema coverage) and no output schema, the description is reasonably complete—it covers what the tool does and the data source. However, without annotations or output schema, it could better address behavioral aspects like rate limits or response structure to fully compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'ip' fully documented in the schema. The description adds no additional parameter details beyond implying IPv4/v6 support (already in schema). Baseline 3 is appropriate as the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('look up'), resource ('IP address'), and what information is returned ('geolocation, ASN, and ISP'), with explicit mention of the data source ('via ipinfo.io'). It distinguishes itself from siblings like dns_lookup by focusing on IP metadata rather than DNS resolution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for IP address analysis, but provides no explicit guidance on when to use this tool versus alternatives (e.g., dns_lookup for domain resolution) or any prerequisites. The context is clear but lacks comparative or exclusionary statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
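For context on the ipinfo.io data source: its JSON endpoint returns city, region, country, coordinates, and an `org` field that packs the ASN and ISP name into one string. A hedged sketch of how such a lookup might work (unauthenticated ipinfo.io requests are rate-limited; this is not the server's code):

```python
import json
from urllib.request import urlopen

def ip_lookup(ip: str) -> dict:
    # ipinfo.io's JSON endpoint; works for both IPv4 and IPv6 addresses.
    with urlopen(f"https://ipinfo.io/{ip}/json") as resp:
        return json.load(resp)

def split_org(org: str) -> tuple:
    # The "org" field packs ASN and ISP together, e.g.
    # "AS13335 Cloudflare, Inc." -> ("AS13335", "Cloudflare, Inc.")
    asn, _, isp = org.partition(" ")
    return asn, isp
```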
json_format (Grade: A)
Format and validate JSON. Returns the pretty-printed string or a parse error.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Raw JSON text | |
| indent | No | Indent width | 2 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool formats and validates JSON, returning either a pretty-printed string or a parse error, which covers basic behavior. However, it lacks details on error message format, performance characteristics, and operational constraints (e.g., rate limits, authentication needs), leaving gaps that the description alone must fill for a validation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, consisting of two concise sentences that directly state the tool's function and output behavior without any wasted words. Every sentence earns its place by providing essential information efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (JSON formatting/validation), no annotations, no output schema, and 100% schema coverage, the description is minimally adequate. It covers the core purpose and output types but lacks details on error formats, validation rules, or examples, which could enhance completeness for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('input' as raw JSON text and 'indent' with enum values and default). The description adds no additional parameter semantics beyond what the schema provides, such as examples or constraints, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('format and validate JSON') and resource ('JSON'), and distinguishes it from sibling tools by focusing on JSON processing rather than encoding/decoding or other utilities. However, it doesn't explicitly differentiate from potential JSON-specific siblings (though none exist in the provided list).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('format and validate JSON') but doesn't provide explicit guidance on when to use this tool versus alternatives (e.g., for JSON vs. other data formats) or any prerequisites. It mentions the return behavior, which hints at usage for validation, but lacks clear when/when-not instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
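The two documented outcomes (pretty-printed string or parse error) can be sketched in a few lines of Python; the exact error-message format is an assumption, since the tool does not document it:

```python
import json

def json_format(text: str, indent: int = 2) -> str:
    # Returns the pretty-printed string, or a parse error message
    # (the message format here is illustrative, not the tool's).
    try:
        return json.dumps(json.loads(text), indent=indent)
    except json.JSONDecodeError as err:
        return f"Parse error: {err}"
```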
jwt_decode (Grade: A)
Decode a JWT into its header, payload, and signature parts. Does NOT verify the signature.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | JWT compact serialization (three dot-separated segments) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it decodes into parts and does not verify signatures. However, it lacks details on error handling, output format, or performance aspects, leaving some gaps in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, consisting of two sentences that directly convey the purpose and key limitation. Every sentence adds essential value with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is mostly complete. It covers the core functionality and limitation well, but could benefit from mentioning the output structure (e.g., JSON object with header/payload/signature) to fully compensate for the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the 'token' parameter. The description adds no additional parameter semantics beyond what the schema provides, such as examples or constraints, resulting in the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Decode a JWT') and the resource ('JWT') into its components ('header, payload, and signature parts'). It distinguishes itself from potential verification tools by explicitly stating 'Does NOT verify the signature,' which is a key differentiator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for decoding JWTs without verification. However, it does not explicitly mention when not to use it or name alternatives (e.g., verification tools), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
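Decoding without verification means splitting the compact serialization on dots and base64url-decoding the first two segments. A local sketch of that behavior (the returned dict shape is an assumption, since the tool publishes no output schema):

```python
import base64
import json

def _b64url_json(segment: str) -> dict:
    # JWT segments are base64url without padding; restore it first.
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def jwt_decode(token: str) -> dict:
    # Splits header.payload.signature; the signature segment is
    # returned raw and is NOT verified, matching the description.
    header, payload, signature = token.split(".")
    return {"header": _b64url_json(header),
            "payload": _b64url_json(payload),
            "signature": signature}
```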
url_decode (Grade: A)
Decode a percent-encoded URL component.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Percent-encoded string | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool decodes percent-encoded strings, which implies a read-only transformation without side effects, but it doesn't disclose error handling, performance limits, or output format details. This is minimal but adequate for a simple tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It's front-loaded and appropriately sized for a straightforward decoding tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no annotations, no output schema), the description is mostly complete. It clearly defines the action and input, but lacks details on output format or error cases, which could be helpful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'input' parameter documented as 'Percent-encoded string.' The description adds no additional parameter details beyond this, so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Decode') and the resource ('a percent-encoded URL component'), distinguishing it from sibling tools like 'url_encode' (which does the opposite) and 'base64_decode' (which handles a different encoding format). It's precise and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'percent-encoded URL component,' suggesting it's for URL decoding. However, it doesn't explicitly state when to use this tool versus alternatives like 'base64_decode' or 'url_encode,' nor does it mention any prerequisites or exclusions, leaving some guidance gaps.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
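Percent-decoding a URL component maps directly onto the standard library; a minimal sketch of the described behavior:

```python
from urllib.parse import unquote

def url_decode(component: str) -> str:
    # unquote turns %XX escapes back into UTF-8 text.
    return unquote(component)
```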
url_encode (Grade: A)
Percent-encode a string for use in a URL component.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | String to encode | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool's function but lacks details on behavioral traits such as error handling, encoding standards (e.g., UTF-8), or performance considerations. The description is accurate but minimal, not adding rich context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is front-loaded with the core action and context, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is reasonably complete for its purpose. It covers what the tool does and its context, but could be enhanced with details on output format or error cases. The simplicity of the tool means the description suffices, though not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'input' documented as 'String to encode.' The description adds no additional semantic details beyond what the schema provides, such as examples or constraints. Given the high schema coverage, the baseline score of 3 is appropriate, as the description does not compensate with extra parameter insights.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Percent-encode') and resource ('a string'), specifying its purpose for 'use in a URL component.' It distinguishes from siblings like 'url_decode' by focusing on encoding rather than decoding, making the purpose unambiguous and well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for use in a URL component,' which helps guide when to apply this tool. However, it does not explicitly mention when not to use it or name alternatives (e.g., compared to 'base64_encode'), leaving some room for improvement in distinguishing from other encoding siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
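The encoding direction can be sketched the same way; whether the tool escapes "/" is not documented, so the `safe=""` choice below is an assumption appropriate for a single URL component:

```python
from urllib.parse import quote

def url_encode(component: str) -> str:
    # safe="" escapes "/" as well, which matters when the string is
    # a single component (path segment or query value), not a full URL.
    return quote(component, safe="")
```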
user_agent_parse (Grade: A)
Parse a User-Agent string into browser, OS, device, and engine components.
| Name | Required | Description | Default |
|---|---|---|---|
| ua | Yes | User-Agent header value | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool's function (parsing into components) but lacks details on behavioral traits like error handling, performance, or output format. The description doesn't contradict annotations, but offers minimal behavioral context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose and output components. It is front-loaded with no wasted words, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is adequate but incomplete. It explains what the tool does but lacks details on output structure, error cases, or usage context, which could help an agent use it correctly without trial and error.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'ua' documented as 'User-Agent header value'. The description adds no additional parameter semantics beyond what the schema provides, such as examples or constraints. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'parse' and the resource 'User-Agent string', specifying the output components (browser, OS, device, engine). It distinguishes from siblings like base64_encode or uuid_generate by focusing on parsing rather than encoding/generation, though it doesn't explicitly differentiate from all siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a User-Agent string needs parsing into components, but provides no explicit guidance on when to use this tool versus alternatives (e.g., if other tools handle similar parsing). It lacks context on prerequisites or exclusions, such as input format requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
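To make the review's gap concrete: the description names the output components (browser, OS, device, engine) but says nothing about their structure. A minimal sketch of the kind of parsing and output shape involved; the patterns, field names, and function are illustrative assumptions, not the server's actual implementation:

```python
import re

def parse_user_agent(ua: str) -> dict:
    """Rough sketch of ua_parse-style behavior. Real parsers (e.g. the
    ua-parser rule sets) cover far more families; this only illustrates
    the output shape an agent might expect."""
    browser = "Unknown"
    # Order matters: Edge and Chrome UAs both contain "Safari/".
    for name, pattern in [("Edge", r"Edg/"), ("Chrome", r"Chrome/"),
                          ("Firefox", r"Firefox/"), ("Safari", r"Safari/")]:
        if re.search(pattern, ua):
            browser = name
            break
    os_name = "Unknown"
    # Android UAs also contain "Linux", so check Android first.
    for name, pattern in [("Windows", r"Windows NT"), ("macOS", r"Mac OS X"),
                          ("Android", r"Android"), ("iOS", r"iPhone|iPad"),
                          ("Linux", r"Linux")]:
        if re.search(pattern, ua):
            os_name = name
            break
    return {"browser": browser, "os": os_name}
```

Documenting even this much (keys returned, fallback value for unrecognized input) would let an agent consume the result without trial and error.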
uuid_generate
Generate a v4 (random) or v7 (time-ordered) UUID.
| Name | Required | Description | Default |
|---|---|---|---|
| version | Yes | UUID version | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the UUID versions but does not disclose behavioral traits like whether generation is deterministic, has rate limits, requires authentication, or what the output format looks like (e.g., string representation). This leaves significant gaps for a tool that produces identifiers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part earns its place by specifying the action and available versions, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema, no annotations), the description is incomplete. It lacks details on output format, error handling, or behavioral context (e.g., idempotency), which are important for an agent to use the tool correctly. The description does not compensate for the absence of annotations or output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'version' fully documented in the schema (enum: v4, v7). The description adds minimal value by restating the versions but does not provide additional semantics, such as explaining the differences between v4 and v7 beyond 'random' vs. 'time-ordered'. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate') and resource ('UUID'), specifying the types available ('v4 (random) or v7 (time-ordered)'). It distinguishes the tool's purpose from siblings like base64_encode or json_format by focusing on UUID generation, not data transformation or parsing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as when to choose v4 vs. v7 UUIDs based on use cases (e.g., randomness vs. time-ordering). It also lacks context on prerequisites or exclusions, leaving the agent to infer usage from the parameter alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
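The v4-versus-v7 trade-off the reviews allude to is worth spelling out: v4 is fully random, while v7 (RFC 9562) front-loads a 48-bit millisecond timestamp so IDs sort by creation time. A Python sketch of the difference; the v7 builder is hand-rolled here because the stdlib `uuid` module has no `uuid7` as of Python 3.12, and none of this is the server's actual implementation:

```python
import os
import time
import uuid

def uuid_v7() -> uuid.UUID:
    """Build a UUIDv7 per RFC 9562: 48-bit Unix-ms timestamp, version
    nibble 7, 12 random bits, variant bits 10, then 62 random bits."""
    ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0x2 << 62) | rand_b
    return uuid.UUID(int=value)

print(uuid.uuid4())  # v4: fully random, no ordering guarantees
print(uuid_v7())     # v7: leading bits sort by creation time
```

A one-line hint in the tool description ("use v7 for database keys that benefit from index locality, v4 when unpredictability matters") would close the usage-guidance gap the review identifies.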
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
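Before publishing, you can sanity-check the file's structure yourself. The sketch below is a hypothetical check (`validate_claim` is not a Glama API); the real verifier fetches the file from your domain's `/.well-known/` path and additionally matches the email against your account:

```python
import json

def validate_claim(data: dict) -> bool:
    """Structural check on a parsed glama.json payload: at least one
    maintainer, each with an email address. Illustrative only."""
    maintainers = data.get("maintainers") or []
    return bool(maintainers) and all("@" in m.get("email", "") for m in maintainers)

claim = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")
```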
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!