Microapp
Server Details
Microapp offers premium utility tools for humans and AI agents, accessible at microapp.io and through this MCP endpoint.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- **Full call logging:** Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- **Tool access control:** Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- **Managed credentials:** Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- **Usage analytics:** See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.2/5 across all 46 tools.
Most tools have distinct purposes. However, 'hex-to-rgb' is redundant with 'color-converter', which already handles hex-to-RGB conversion, causing potential confusion.
Names follow a consistent lowercase-with-hyphens style, but vary in pattern (e.g., 'angle-converter', 'average-calculator', 'dedup-lines'). One tool ('internal-do-not-call') deviates from the descriptive norm.
With 46 tools, the server is heavily populated. Many converters could be merged into a generic unit converter, and there is redundancy, making the surface unnecessarily large for a single server.
The server covers a broad range of utility domains: converters, text processing, math, cryptography, etc. Minor redundancies exist (e.g., hex-to-rgb vs color-converter), but the set is otherwise comprehensive.
Available Tools
46 tools

angle-converter (A) · Read-only · Idempotent
Convert between angle units: degree, radian, gradian, turn.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | Angle value to convert. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
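The likely mechanics of a unit converter like this can be sketched by normalizing the source value to a pivot unit (radians) and rescaling to the target. This is a hypothetical reconstruction, not the server's actual implementation; the `convert_angle` name and radian pivot are assumptions.

```python
import math

# Assumed conversion factors: how many radians one unit represents.
TO_RADIANS = {
    "degree": math.pi / 180,
    "radian": 1.0,
    "gradian": math.pi / 200,
    "turn": 2 * math.pi,
}

def convert_angle(value: float, from_unit: str, to_unit: str) -> dict:
    """Sketch of angle-converter: pivot through radians, echo inputs back."""
    radians = value * TO_RADIANS[from_unit]
    result = radians / TO_RADIANS[to_unit]
    # Mirrors the documented output schema: to/from/value echoed, plus result.
    return {"to": to_unit, "from": from_unit, "value": value, "result": result}

print(convert_angle(180, "degree", "radian")["result"])
```

A pivot-unit table like this is also why the many sibling converters could plausibly be merged into one generic tool, as the review above notes.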
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, which inform the agent that the tool is safe and idempotent. The description adds the specific supported units, providing useful context without contradicting the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that conveys the tool's purpose without any superfluous words. It is front-loaded and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple, with a well-defined input schema and an output schema present. The description, combined with the schema and annotations, provides complete information for an agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for all parameters. The tool description does not add additional meaning beyond what the schema provides, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts between angle units (degree, radian, gradian, turn). It uses a specific verb 'Convert' and lists the exact resources, distinguishing it from sibling converter tools like area-converter or length-converter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for converting angles but provides no explicit guidance on when to use this tool versus alternatives or any prerequisites. The purpose is clear, but no usage scenarios or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
area-converter (A) · Read-only · Idempotent
Convert between area units: square-meter, square-kilometer, square-foot, square-yard, acre, hectare, square-mile.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | Area value to convert. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, and destructiveHint, which convey safety and idempotency. The description adds the list of supported units, which is behavioral context not in annotations. It does not mention return format or precision, but with output schema present, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently communicates the purpose and supported units. No extraneous words or repetition of schema fields. It is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple conversion tool with a rich schema (100% coverage, enums, output schema), the description is adequately complete. It covers what the tool does and which units are supported. It does not explain conversion behavior (e.g., precision), but that is standard and implied.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with each parameter having a basic description (e.g., 'Source unit.'). The tool description adds the list of units, which is already present in the schema's enum. Thus, the description adds no meaningful semantics beyond the schema, earning a baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Convert between area units' and lists the seven supported units. It uses a specific verb 'Convert' and identifies the resource 'area units', which distinguishes it from sibling converters like length-converter or temperature-converter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use or avoid this tool versus alternatives is given. However, given the naming and context (many sibling converters), the purpose is self-explanatory. A baseline score of 3 is appropriate; the absence of explicit when-not-to-use guidance keeps it from scoring higher.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
aspect-ratio (A) · Read-only · Idempotent
Compute the simplified aspect ratio of a width × height pair, plus the decimal ratio. Useful for image, video, screen, and design contexts.
| Name | Required | Description | Default |
|---|---|---|---|
| width | Yes | Width in any unit. | |
| height | Yes | Height in the same unit as width. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| gcd | Yes | Greatest common divisor used to simplify. |
| simplified | Yes | Simplified width and height. |
| ratio_string | Yes | Simplified ratio in 'W:H' form. |
| ratio_decimal | Yes | Decimal width-to-height ratio. |
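Simplifying a width × height pair reduces both sides by their greatest common divisor, which matches the output schema's `gcd` field. A minimal sketch (the `aspect_ratio` function name is an assumption):

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> dict:
    """Sketch of the aspect-ratio tool: divide out the GCD, report both forms."""
    g = gcd(width, height)
    return {
        "gcd": g,
        "simplified": {"width": width // g, "height": height // g},
        "ratio_string": f"{width // g}:{height // g}",
        "ratio_decimal": width / height,
    }

print(aspect_ratio(1920, 1080)["ratio_string"])  # → 16:9
```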
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly/idempotent/destructive hints. Description adds that it returns simplified ratio and decimal, but doesn't disclose edge cases or limitations. Acceptable given annotations cover safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: function + usage context. No wasted words, front-loaded with action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple calculator with full schema and output schema, description provides complete context. No missing information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions. Description adds no additional parameter details beyond schema. Baseline 3 per instructions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it computes the simplified aspect ratio and decimal ratio. It distinguishes from sibling tools, which are other converters, and gives specific use contexts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions useful contexts (image, video, screen, design) but lacks explicit when-not-to-use guidance or alternative tools. Since there are no closely related siblings, a score of 4 is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
average-calculator (A) · Read-only · Idempotent
Compute summary statistics for a list of numbers: count, sum, mean, median, min, max, and mode(s).
| Name | Required | Description | Default |
|---|---|---|---|
| numbers | Yes | Array of numbers to summarize. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| max | Yes | Largest value. |
| min | Yes | Smallest value. |
| sum | Yes | Sum of all inputs. |
| mean | Yes | Arithmetic mean. |
| count | Yes | How many numbers were summarized. |
| modes | Yes | Most frequently occurring values. |
| median | Yes | Middle value (or average of two middle values for even-length lists). |
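The documented statistics map directly onto Python's standard library; a sketch of the presumable computation (the `summarize` name is an assumption, and `multimode` returns all most-frequent values, matching the plural `modes` field):

```python
from statistics import mean, median, multimode

def summarize(numbers: list[float]) -> dict:
    """Sketch of average-calculator: count, sum, mean, median, min, max, mode(s)."""
    return {
        "count": len(numbers),
        "sum": sum(numbers),
        "mean": mean(numbers),
        # statistics.median averages the two middle values for even-length lists,
        # as the output schema describes.
        "median": median(numbers),
        "min": min(numbers),
        "max": max(numbers),
        "modes": multimode(numbers),
    }

print(summarize([1, 2, 2, 3, 4])["median"])  # → 2
```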
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the safety profile is clear. The description adds the specific statistics computed, which is useful context but does not reveal additional behavioral traits beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that efficiently conveys the tool's purpose and outputs. No extraneous information; every word earns its place. Could be slightly more structured but is adequate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (so return format is documented elsewhere), the description sufficiently covers what the tool does. For a simple calculator tool with one parameter, it is complete: it specifies the statistics computed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a description for the 'numbers' parameter. The tool description lists statistics but does not add deeper meaning or constraints beyond what the schema provides, so it meets the baseline without adding extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Compute summary statistics' and the resource 'list of numbers'. It lists exactly which statistics are computed (count, sum, mean, median, min, max, mode(s)), distinguishing it from siblings like 'geometric-mean' or 'percentage-calculator'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for summarizing numeric lists but does not provide explicit guidance on when to use this tool versus alternatives like geometric-mean or when not to use it (e.g., for single values). No when-not or exclusion criteria are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
base64 (A) · Read-only · Idempotent
Encode plain text to base64 or decode base64 back to text. Use mode='encode' for plain→base64, mode='decode' for base64→plain. UTF-8 safe.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | encode: text → base64. decode: base64 → text. | |
| input | Yes | Input string. Plain text for encode, base64 for decode. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| mode | Yes | Mode used, echoed back. |
| output | Yes | The encoded or decoded result. |
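"UTF-8 safe" plausibly means text is encoded to UTF-8 bytes before base64 and decoded back afterward, so non-ASCII characters round-trip. A sketch under that assumption (the `base64_tool` name is hypothetical):

```python
import base64

def base64_tool(mode: str, input: str) -> dict:
    """Sketch of the base64 tool's two modes."""
    if mode == "encode":
        # Text → UTF-8 bytes → base64 ASCII string.
        output = base64.b64encode(input.encode("utf-8")).decode("ascii")
    elif mode == "decode":
        # Base64 string → bytes → UTF-8 text.
        output = base64.b64decode(input).decode("utf-8")
    else:
        raise ValueError("mode must be 'encode' or 'decode'")
    return {"mode": mode, "output": output}

print(base64_tool("encode", "héllo")["output"])  # → aMOpbGxv
```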
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description does not need to repeat safety. The description adds value by stating 'UTF-8 safe', which informs about character encoding handling. This goes beyond annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two brief sentences. The first sentence establishes purpose, the second gives usage guidance. No unnecessary words; every sentence serves a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, an output schema exists (as per context), so return values need not be described. The description covers encoding/decoding direction, UTF-8 safety, and the two parameters. It is complete for this tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both mode and input well-described in the schema. The description restates the mode enumeration and adds 'UTF-8 safe', but does not significantly augment the schema's parameter documentation. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Encode plain text to base64 or decode base64 back to text.' It specifies the verb (encode/decode) and resource (base64 text), and it is distinguished from sibling converters since no other tool handles base64.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use each mode: 'Use mode='encode' for plain→base64, mode='decode' for base64→plain.' This gives clear context for usage, though it does not mention when not to use the tool or alternatives. For a simple tool, this is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
case-converter (A) · Read-only · Idempotent
Change the case of text: UPPERCASE, lowercase, Title Case, or Sentence case.
| Name | Required | Description | Default |
|---|---|---|---|
| case | Yes | Target case. upper=ALL CAPS, lower=all lower, title=Title Case Each Word, sentence=First letter only. | |
| text | Yes | Text to transform. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| case | Yes | Case used, echoed back. |
| output | Yes | Transformed text. |
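The four documented modes map onto standard string methods; a sketch, noting that the real tool's title-casing rules (e.g. around apostrophes) may differ and the `convert_case` name is an assumption:

```python
def convert_case(text: str, case: str) -> str:
    """Sketch of case-converter's four modes."""
    if case == "upper":
        return text.upper()          # ALL CAPS
    if case == "lower":
        return text.lower()          # all lower
    if case == "title":
        return text.title()          # Title Case Each Word
    if case == "sentence":
        # First letter capitalized, remainder lowercased.
        return text[:1].upper() + text[1:].lower()
    raise ValueError("case must be upper, lower, title, or sentence")

print(convert_case("hello world", "title"))  # → Hello World
```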
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds no new behavioral context beyond what the schema provides. It does not disclose any traits like pure transformation or handling of edge cases. With annotations covering safety, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently communicates the core functionality. It is front-loaded with the action and lists the specific case options. No wasted words or unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (as indicated by context) and the input schema is fully documented, the description adequately covers the tool's behavior. It specifies the transformation and case options, which is sufficient for a simple utility. No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions for both parameters (text and case). The description only restates the case types already defined in the schema. It does not add any new semantic meaning beyond what the schema provides, so a baseline of 3 is justified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Change the case of text' and lists four specific case options (UPPERCASE, lowercase, Title Case, Sentence case). This clearly identifies the tool's purpose and distinguishes it from siblings, as no other sibling tool handles case conversion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implicitly clear given the unique purpose among siblings. However, the description does not explicitly state when to use this tool versus alternatives or when not to use it (e.g., for non-standard locale rules). The context of sibling tools provides enough differentiation, but explicit guidance is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
character-counter (A) · Read-only · Idempotent
Count characters in text, with and without spaces. Returns separate counts so you can answer questions like 'fits in a tweet (280 chars)?' or 'fits in an SMS (160 chars)?' without guessing.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to analyze. Capped at 60,000 characters. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| fits_sms | Yes | True if 160 characters or fewer. |
| characters | Yes | Total characters. |
| fits_tweet | Yes | True if 280 characters or fewer. |
| characters_no_spaces | Yes | Characters excluding ASCII whitespace. |
| characters_no_whitespace | Yes | Characters excluding all Unicode whitespace. |
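The schema's distinction between "excluding ASCII whitespace" and "excluding all Unicode whitespace" can be sketched as below; a non-breaking space (U+00A0) counts in the first but not the second. The `count_characters` name is an assumption.

```python
def count_characters(text: str) -> dict:
    """Sketch of character-counter's whitespace-sensitive counts."""
    # ASCII whitespace only: space, tab, newline, CR, form feed, vertical tab.
    no_spaces = sum(1 for c in text if c not in " \t\n\r\f\v")
    # str.isspace() covers all Unicode whitespace (e.g. U+00A0, U+2009).
    no_whitespace = sum(1 for c in text if not c.isspace())
    return {
        "characters": len(text),
        "characters_no_spaces": no_spaces,
        "characters_no_whitespace": no_whitespace,
        "fits_tweet": len(text) <= 280,
        "fits_sms": len(text) <= 160,
    }

print(count_characters("a\u00a0b")["characters_no_whitespace"])  # → 2
```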
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, and destructiveHint (false). The description adds that it returns separate counts (with/without spaces) and references the 60k character cap, which is also in the schema but reinforces behavioral expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core function and examples, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and annotations, the description fully covers what the tool does and how to use it. The use case examples help an agent select it correctly among many text utilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'text'. The description does not add new semantic details about the parameter itself, only context for its use. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool counts characters with and without spaces, and provides concrete use cases (tweet or SMS length checks). It distinguishes from siblings like word-counter and vowel-counter by focusing on character counting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear context for when to use the tool (e.g., checking if text fits Twitter or SMS limits). However, it does not explicitly mention when not to use it or alternatives, though the sibling tools list implies alternatives like word-counter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
color-converter (A) · Read-only · Idempotent
Convert a color between hex, RGB, HSL, and HSV representations. Auto-detects the input format from the string syntax.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | The input color string. Accepts: #RGB, #RRGGBB, #RGBA, #RRGGBBAA, rgb(r,g,b), rgba(r,g,b,a), hsl(h,s%,l%), hsla(h,s%,l%,a). Spaces optional. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| hex | Yes | 6-digit hex color with leading #. |
| hsl | Yes | HSL representation. |
| hsv | Yes | HSV representation. |
| rgb | Yes | RGB representation. |
| rgba | Yes | RGBA representation. |
| hex_with_alpha | Yes | 8-digit hex with alpha if alpha < 1, else same as hex. |
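The core conversion can be sketched with the standard `colorsys` module. This handles only the `#RRGGBB` form; the real tool auto-detects many more syntaxes (and the `convert_color` name is an assumption):

```python
import colorsys
import re

def convert_color(input: str) -> dict:
    """Minimal sketch of color-converter for #RRGGBB input only."""
    m = re.fullmatch(r"#([0-9a-fA-F]{6})", input)
    if not m:
        raise ValueError("only #RRGGBB is handled in this sketch")
    hex_digits = m.group(1)
    r, g, b = (int(hex_digits[i:i + 2], 16) for i in (0, 2, 4))
    # colorsys works in [0, 1] and returns (hue, lightness, saturation).
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return {
        "hex": f"#{hex_digits.lower()}",
        "rgb": f"rgb({r},{g},{b})",
        "hsl": f"hsl({round(h * 360)},{round(s * 100)}%,{round(l * 100)}%)",
    }

print(convert_color("#FF0000")["hsl"])  # → hsl(0,100%,50%)
```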
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds the important behavioral detail of auto-detecting input format, which is beyond annotation. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, front-loaded with the core action. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single string parameter and an output schema (not provided but exists), the description fully covers what the tool does and how input is handled. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the schema already lists accepted formats. The description reiterates auto-detection but adds little new semantic information beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts colors between hex, RGB, HSL, and HSV, and auto-detects input format, distinguishing it from more specific sibling tools like 'hex-to-rgb'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly conveys when to use (when needing color conversion) but does not explicitly state when not to use or provide alternatives. No exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
data-storage-converter (A) · Read-only · Idempotent
Convert between digital storage units: bit, byte, kilobyte (1024 B), megabyte, gigabyte, terabyte. Uses binary (1024-based) sizing.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. Uses binary (1024-based) sizing for byte→kilobyte etc. | |
| value | Yes | Storage value to convert. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
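The documented binary (1024-based) sizing implies each unit is a power of 1024 bytes, with a bit as 1/8 byte. A sketch under that reading (names are assumptions):

```python
# Assumed factors: bytes represented by one of each unit, binary sizing.
FACTORS = {
    "bit": 1 / 8,
    "byte": 1,
    "kilobyte": 1024,
    "megabyte": 1024 ** 2,
    "gigabyte": 1024 ** 3,
    "terabyte": 1024 ** 4,
}

def convert_storage(value: float, from_unit: str, to_unit: str) -> float:
    """Sketch of data-storage-converter: pivot through bytes."""
    return value * FACTORS[from_unit] / FACTORS[to_unit]

print(convert_storage(1, "gigabyte", "megabyte"))  # → 1024.0
```

Note the review's caveat: a caller wanting decimal (1000-based) SI conversion would get different numbers from this tool.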
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explicitly states that conversions use binary (1024-based) sizing, which is key behavioral information. Annotations already indicate read-only and idempotent, so the description adds the specific binary sizing context. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, only two sentences, front-loaded with the core purpose and the critical detail about binary sizing. Every sentence is necessary, with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the full schema coverage, existing annotations, and presence of an output schema, the description is complete. It covers all essential aspects: purpose, units, and the binary sizing convention.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with descriptions for each parameter. The description adds value by listing units explicitly and reinforcing that 'from' uses binary sizing, which is also noted in the schema description. The baseline is 3 due to full coverage, but the description provides additional clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts between digital storage units and lists all supported units (bit, byte, kilobyte, megabyte, gigabyte, terabyte), distinguishing it from sibling converters like length-converter or temperature-converter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it uses binary (1024-based) sizing, which guides usage, but does not explicitly state when to use this tool versus alternatives (e.g., if a user wants decimal-based conversion, this tool is not appropriate). No exclusions or alternatives are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
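The binary-sizing convention the review keeps returning to is easy to pin down with a minimal sketch. This is an illustrative reimplementation, not the server's actual code; the function name and factor table are ours. Bits serve as the base unit, and 1024-based multipliers give the "1 kilobyte = 1024 bytes" behavior the description promises.

```python
# Hypothetical sketch of a binary (1024-based) storage converter,
# mirroring the units the tool advertises. Bits are the base unit.
FACTORS_IN_BITS = {
    "bit": 1,
    "byte": 8,
    "kilobyte": 8 * 1024,
    "megabyte": 8 * 1024**2,
    "gigabyte": 8 * 1024**3,
    "terabyte": 8 * 1024**4,
}

def convert_storage(value: float, from_unit: str, to_unit: str) -> float:
    """Convert via bits; binary sizing means 1 kilobyte = 1024 bytes."""
    bits = value * FACTORS_IN_BITS[from_unit]
    return bits / FACTORS_IN_BITS[to_unit]
```

Note that a decimal (1000-based) converter would give different answers for the same inputs, which is exactly why the review flags the missing "when not to use" guidance.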
days-betweenARead-onlyIdempotentInspect
Calculate the number of days between two ISO 8601 dates. Returns days, weeks, months, and years (decimal). Inclusive count is the absolute difference |end - start|.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | End date in ISO 8601 (YYYY-MM-DD or full ISO timestamp). | |
| start | Yes | Start date in ISO 8601 (YYYY-MM-DD or full ISO timestamp). |
Output Schema
| Name | Required | Description |
|---|---|---|
| days | Yes | Number of days between the dates (absolute). |
| weeks | Yes | Equivalent number of weeks. |
| years | Yes | Approximate number of years (assuming 365.25 days/year). |
| months | Yes | Approximate number of months (assuming 30.4375 days/month). |
| direction | Yes | Whether end comes after or before start. |
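The behavior documented above can be sketched in a few lines. This is an assumption-laden reimplementation, not the server's code: it handles only YYYY-MM-DD inputs (the real tool also accepts full ISO timestamps), and the "direction" values are guesses. The conversion factors (365.25 days/year, 30.4375 days/month) come straight from the output schema.

```python
from datetime import date

def days_between(start: str, end: str) -> dict:
    """Absolute day difference between two ISO 8601 dates, with the
    derived units the output schema lists."""
    d0 = date.fromisoformat(start)
    d1 = date.fromisoformat(end)
    days = abs((d1 - d0).days)
    return {
        "days": days,
        "weeks": days / 7,
        "months": days / 30.4375,   # schema's stated approximation
        "years": days / 365.25,     # schema's stated approximation
        "direction": "after" if d1 >= d0 else "before",
    }
```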
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (read-only, idempotent, non-destructive), the description adds that it returns weeks, months, years, and uses inclusive absolute difference, providing valuable behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with purpose and followed by return details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive annotations, the description covers all necessary context for correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema already describes both parameters with 100% coverage. The description adds no new parameter information beyond restating that they are ISO 8601 dates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it calculates days between two ISO 8601 dates, which is specific and distinct from sibling converter/calculator tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It clearly states the tool calculates date differences, which is the primary usage context. No explicit alternative or when-not-to-use guidance is needed, given that its sibling tools are unrelated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dedup-linesARead-onlyIdempotentInspect
Remove duplicate lines from a text block. Optionally case-insensitive. Order is preserved (not sorted).
| Name | Required | Description | Default |
|---|---|---|---|
| keep | No | Which occurrence to keep. Default first. | |
| text | Yes | Multi-line text. | |
| case_insensitive | No | If true, lines that differ only in case are considered duplicates. Default false. |
Output Schema
| Name | Required | Description |
|---|---|---|
| kept | Yes | Number of lines kept. |
| output | Yes | Deduplicated text with newline-joined lines. |
| removed | Yes | Number of duplicate lines removed. |
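The order-preserving dedup described above, including the optional case folding and the keep-first/keep-last choice, can be sketched as follows. The function name and the 'first'/'last' values for keep are assumptions on our part; the schema only says "Which occurrence to keep. Default first."

```python
def dedup_lines(text: str, case_insensitive: bool = False,
                keep: str = "first") -> dict:
    """Remove duplicate lines, preserving original order (never sorted)."""
    lines = text.split("\n")
    key = (lambda s: s.lower()) if case_insensitive else (lambda s: s)
    if keep == "last":
        # Keep the last occurrence: scan in reverse, then restore order.
        seen, kept_rev = set(), []
        for line in reversed(lines):
            if key(line) not in seen:
                seen.add(key(line))
                kept_rev.append(line)
        kept = list(reversed(kept_rev))
    else:
        seen, kept = set(), []
        for line in lines:
            if key(line) not in seen:
                seen.add(key(line))
                kept.append(line)
    return {
        "output": "\n".join(kept),
        "kept": len(kept),
        "removed": len(lines) - len(kept),
    }
```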
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, providing the safety profile. The description adds behavioral details beyond annotations: order preservation and optional case-insensitive matching, which are not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the main action ('Remove duplicate lines') and include essential behavioral nuances (case-insensitivity, order preservation) with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, annotations, and full schema coverage, the description is sufficient. It covers purpose, key behaviors (case-insensitivity, order preservation), and implicitly the use case.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters fully. The description mentions case-insensitive matching but does not add new meaning beyond the schema. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it removes duplicate lines from a text block, specifies optional case-insensitivity, and explicitly notes that order is preserved (not sorted). This distinguishes it from the sibling tool 'sort-lines' which sorts lines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly indicates the tool's purpose (deduplication with order preservation), implying when to use it. It does not explicitly say when not to use it or compare itself with siblings like 'sort-lines', but the sibling context provides sufficient differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discount-calculatorARead-onlyIdempotentInspect
Apply a percentage discount to a price. Returns sale price, amount saved, and the discount percentage echoed back.
| Name | Required | Description | Default |
|---|---|---|---|
| original | Yes | Original price. | |
| discount_percent | Yes | Discount percentage (e.g. 25 for 25%). |
Output Schema
| Name | Required | Description |
|---|---|---|
| original | Yes | Original price, echoed back. |
| you_save | Yes | Amount saved. |
| sale_price | Yes | Price after discount. |
| discount_percent | Yes | Discount percent, echoed back. |
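The arithmetic here is simple enough to state exactly. A plausible sketch of the documented input/output contract (the function name is ours; the dict keys mirror the output schema):

```python
def discount_calculator(original: float, discount_percent: float) -> dict:
    """Apply a percentage discount; inputs are echoed back, as the
    output schema specifies."""
    you_save = original * discount_percent / 100
    return {
        "original": original,
        "discount_percent": discount_percent,
        "you_save": you_save,
        "sale_price": original - you_save,
    }
```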
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds useful behavioral details about returned values (sale price, amount saved, discount percentage) beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single clear sentence front-loading the action and results. Could be more structured but is efficient for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 parameters, output schema exists), the description sufficiently explains the tool's function and output. No gaps for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The tool description does not add new semantic meaning beyond the schema, so baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool applies a percentage discount to a price and lists the returned values. It is specific and distinct from sibling tools like percentage-calculator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs. alternatives such as percentage-calculator or tip-calculator. Usage is implied but unclear for an agent needing differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
energy-converterARead-onlyIdempotentInspect
Convert between energy units: joule, kilojoule, calorie (thermochemical), kilocalorie, watt-hour, btu.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | Energy value to convert. |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
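A conversion tool of this shape typically normalizes through a base unit. The sketch below assumes joules as the base; the factor table uses the standard values for the listed units (the thermochemical calorie is exactly 4.184 J, and we assume the BTU (IT) definition, which the description does not specify).

```python
# Assumed joule-based factors for the advertised units.
JOULES = {
    "joule": 1.0,
    "kilojoule": 1000.0,
    "calorie": 4.184,          # thermochemical calorie, per the description
    "kilocalorie": 4184.0,
    "watt-hour": 3600.0,
    "btu": 1055.05585262,      # BTU (IT); an assumption, not stated by the tool
}

def convert_energy(value: float, from_unit: str, to_unit: str) -> dict:
    """Convert via joules; echoes inputs back as the output schema does."""
    joules = value * JOULES[from_unit]
    return {"value": value, "from": from_unit, "to": to_unit,
            "result": joules / JOULES[to_unit]}
```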
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the description carries less burden. It does not add behavioral details but does not contradict annotations; overall sufficient for a simple conversion tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that immediately communicates the tool's purpose. Every word adds value with no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the description combined with schema and annotations covers the essential aspects. A minor addition about rounding or typical use cases could enhance completeness, but it is already well-specified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter having a descriptive label. The description only lists units, adding no extra meaning beyond what the schema already provides, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function—converting between energy units—and lists the specific units supported, which distinguishes it from sibling converters for other physical quantities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (when you need energy unit conversion) but provides no guidance on when not to use it or alternatives; the sibling tools handle other domains, but this is not mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fraction-simplifierARead-onlyIdempotentInspect
Simplify a fraction to lowest terms. Returns the simplified numerator/denominator, the decimal value, the gcd used, and the equivalent percentage.
| Name | Required | Description | Default |
|---|---|---|---|
| numerator | Yes | Numerator (integer). | |
| denominator | Yes | Denominator (integer, non-zero). |
Output Schema
| Name | Required | Description |
|---|---|---|
| gcd | Yes | Greatest common divisor used. |
| decimal | Yes | Decimal value of the fraction. |
| percentage | Yes | Fraction expressed as a percentage. |
| simplified | Yes | Simplified fraction. |
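The documented behavior (reduce by the gcd, return decimal and percentage alongside) is a textbook routine. A minimal sketch, with the caveat that the exact shape of the "simplified" output field is not specified by the schema, so the "n/d" string below is our guess:

```python
from math import gcd

def simplify_fraction(numerator: int, denominator: int) -> dict:
    """Reduce a fraction to lowest terms via the gcd."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    g = gcd(numerator, denominator)
    n, d = numerator // g, denominator // g
    return {
        "simplified": f"{n}/{d}",   # output format assumed, not schema-specified
        "gcd": g,
        "decimal": numerator / denominator,
        "percentage": 100 * numerator / denominator,
    }
```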
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds value by detailing return values (simplified fraction, decimal, gcd, percentage) and the core action of simplification, providing full transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys purpose and output. Every word is essential; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, rich annotations, and existing output schema, the description is fully complete. It covers purpose, behavior, and return values, leaving no gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes both parameters with 100% coverage. The description adds that denominator must be non-zero, a constraint not fully captured in schema validation, thus adding meaningful context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool simplifies a fraction to lowest terms and lists return values. It distinguishes itself from sibling tools (e.g., converters, calculators) by focusing specifically on fraction simplification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for fractions but does not explicitly state when to use this tool instead of alternatives or provide exclusion criteria. Sibling tools vary widely, so more explicit guidance would be helpful.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
geometric-meanARead-onlyIdempotentInspect
Compute the geometric mean (n-th root of the product) of a list of positive numbers. Also returns the arithmetic mean for comparison.
| Name | Required | Description | Default |
|---|---|---|---|
| numbers | Yes | Array of positive numbers. Negative or zero values invalidate the geometric mean. |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | How many numbers were summarized. |
| geometric_mean | Yes | Geometric mean (n-th root of the product). |
| arithmetic_mean | Yes | Arithmetic mean, for comparison. |
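The "n-th root of the product" definition, the positivity constraint, and the arithmetic-mean comparison can all be captured in a short sketch (requires Python 3.8+ for math.prod; names are illustrative):

```python
from math import prod

def geometric_mean(numbers: list) -> dict:
    """Geometric mean as the n-th root of the product; inputs must be > 0."""
    if any(x <= 0 for x in numbers):
        raise ValueError("geometric mean requires strictly positive numbers")
    n = len(numbers)
    return {
        "count": n,
        "geometric_mean": prod(numbers) ** (1 / n),
        "arithmetic_mean": sum(numbers) / n,
    }
```

For long lists, a production implementation would likely average logarithms (exp of the mean of logs) to avoid overflow in the raw product; whether this server does so is not documented.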
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. Description adds that the tool also returns arithmetic mean, providing extra context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words; purpose and additional return value are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity, complete schema and annotations, and existence of output schema, the description provides all necessary context. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of the 'numbers' parameter including constraints (positive, non-zero). Description adds no additional meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Compute the geometric mean' and distinguishes from sibling tools like average-calculator by specifying the geometric mean and also returning the arithmetic mean for comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies when to use (computing geometric mean of positives) and implicitly contrasts with sibling average-calculator, but does not explicitly state when not to use or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hex-to-rgbARead-onlyIdempotentInspect
Convert a hex color (e.g. #ff8800) to RGB and RGBA channel values. Accepts 3-, 4-, 6-, and 8-digit hex with or without the leading #.
| Name | Required | Description | Default |
|---|---|---|---|
| hex | Yes | Hex color string. Accepts #RGB, #RGBA, #RRGGBB, #RRGGBBAA, or the same forms without the leading #. |
Output Schema
| Name | Required | Description |
|---|---|---|
| a | Yes | Alpha channel (0-1, defaults to 1 if not present in input). |
| b | Yes | Blue channel (0-255). |
| g | Yes | Green channel (0-255). |
| r | Yes | Red channel (0-255). |
| rgb | Yes | CSS rgb() string. |
| rgba | Yes | CSS rgba() string. |
| hex_normalized | Yes | Normalized 6-digit hex string with leading #. |
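The parsing rules spelled out in the description and schema (3/4/6/8-digit forms, optional leading #, short forms expanded per digit) can be sketched as below. The exact CSS string formatting and error behavior are assumptions; the channel math follows the schema.

```python
def hex_to_rgb(hex_str: str) -> dict:
    """Parse #RGB, #RGBA, #RRGGBB, or #RRGGBBAA (leading # optional)."""
    s = hex_str.lstrip("#")
    if len(s) in (3, 4):             # short form: expand each digit, e.g. f -> ff
        s = "".join(c * 2 for c in s)
    if len(s) not in (6, 8):
        raise ValueError("invalid hex color")
    r, g, b = (int(s[i:i + 2], 16) for i in (0, 2, 4))
    a = int(s[6:8], 16) / 255 if len(s) == 8 else 1.0   # alpha defaults to 1
    return {
        "r": r, "g": g, "b": b, "a": a,
        "rgb": f"rgb({r}, {g}, {b})",
        "rgba": f"rgba({r}, {g}, {b}, {a})",
        "hex_normalized": f"#{s[:6]}",
    }
```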
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and idempotent behavior, which aligns with a color conversion. The description adds specific input format details (accepts 3-, 4-, 6-, 8-digit hex with or without #), enhancing transparency without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that convey all essential information. No redundant words, making it concise and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the description is complete: it explains input format variations and the output type (RGB/RGBA). An output schema exists, so return values are covered. No gaps remain for this straightforward conversion tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'hex' is fully described in the schema (100% coverage). The description adds no additional meaning beyond what the schema provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: converting hex color to RGB and RGBA channel values. It uses a specific verb ('convert') and resource ('hex color to RGB and RGBA'). This distinguishes it from siblings like 'color-converter' which may be more generic.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., color-converter). There are no explicit when-to-use or when-not-to-use instructions, leaving an agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
internal-do-not-callARead-onlyIdempotentInspect
Internal — do not call. (This is a honeypot; calling it bans your IP.)
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description contradicts annotations: annotations set readOnlyHint=true and destructiveHint=false, but description says calling it bans IP (destructive). This is a direct contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, concise and front-loaded with critical warning. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter honeypot tool with no output schema, the description is fully complete: it explains what it is and the consequence of calling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist; description correctly omits parameter details. Baseline score 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Internal — do not call' and explains it is a honeypot that bans IPs. This clearly defines its purpose as a trap, distinct from sibling utility tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description directly instructs not to call it and warns of the consequence (IP ban), providing perfect usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
json-formatterARead-onlyIdempotentInspect
Format JSON: pretty-print with indentation or minify to a single line. Validates the input — invalid JSON returns a clear error pointing at the problem.
| Name | Required | Description | Default |
|---|---|---|---|
| json | Yes | Input JSON text. Must parse as valid JSON. | |
| mode | No | pretty: indent with spaces (default). minify: strip all whitespace. | |
| indent | No | Spaces of indent for pretty mode. Default 2. |
Output Schema
| Name | Required | Description |
|---|---|---|
| mode | Yes | Mode used. |
| output | Yes | Formatted JSON string. |
| bytes_in | Yes | Length of the input string. |
| bytes_out | Yes | Length of the output string. |
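The pretty/minify split and the validate-then-format behavior map directly onto the standard library. A sketch under the assumption that "bytes" in the output schema means string length (the schema itself says "Length of the ... string"):

```python
import json

def format_json(text: str, mode: str = "pretty", indent: int = 2) -> dict:
    """Pretty-print or minify JSON. json.loads raises a JSONDecodeError
    that points at the offending line/column for invalid input."""
    parsed = json.loads(text)
    if mode == "minify":
        out = json.dumps(parsed, separators=(",", ":"))  # strip all whitespace
    else:
        out = json.dumps(parsed, indent=indent)
    return {
        "mode": mode,
        "output": out,
        "bytes_in": len(text),
        "bytes_out": len(out),
    }
```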
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds validation behavior and error reporting beyond annotations, which already indicate read-only, idempotent, non-destructive traits. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences that directly convey the tool's purpose and key behavior without unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple formatting tool, the description covers all essential aspects: formatting options, validation, and error handling. An output schema exists, so return value details are not needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions the two format modes but does not elaborate on the indent parameter beyond what the schema already provides. With 100% schema coverage, description adds marginal value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool formats JSON by pretty-printing or minifying, with validation. This distinguishes it from sibling utility tools that serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. Usage is implied by the tool's function, but alternatives among siblings are not addressed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jwt-decoderARead-onlyIdempotentInspect
Decode a JWT into its header and payload. Does NOT verify the signature — use this for inspection only, never to trust the token's claims.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | The JWT to decode. Standard 3-part base64url-encoded format: header.payload.signature. |
Output Schema
| Name | Required | Description |
|---|---|---|
| header | Yes | Decoded header JSON. |
| expired | Yes | Whether the token has expired, or null if no exp. |
| payload | Yes | Decoded payload JSON. |
| issued_at | Yes | ISO timestamp from 'iat' claim, or null. |
| expires_at | Yes | ISO timestamp from 'exp' claim, or null. |
| not_before | Yes | ISO timestamp from 'nbf' claim, or null. |
| signature_present | Yes | Whether the signature segment is non-empty. |
| signature_verified | Yes | Always false — this tool does not verify signatures. |
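Decoding without verification, as this tool explicitly does, amounts to base64url-decoding two of the three dot-separated segments. A trimmed sketch covering the core output fields (the timestamp-derived fields are omitted for brevity, and all names are illustrative):

```python
import base64
import json
from datetime import datetime, timezone

def decode_jwt(token: str) -> dict:
    """Decode header and payload WITHOUT verifying the signature."""
    header_b64, payload_b64, signature = token.split(".")

    def b64url_json(segment: str) -> dict:
        pad = "=" * (-len(segment) % 4)   # restore stripped base64 padding
        return json.loads(base64.urlsafe_b64decode(segment + pad))

    payload = b64url_json(payload_b64)
    exp = payload.get("exp")
    now = datetime.now(timezone.utc).timestamp()
    return {
        "header": b64url_json(header_b64),
        "payload": payload,
        "expired": (exp < now) if exp else None,
        "signature_present": bool(signature),
        "signature_verified": False,  # this sketch, like the tool, never verifies
    }
```

Because nothing is verified, every claim in the payload is attacker-controllable; this is why the description's "inspection only" warning matters.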
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds critical behavioral information beyond annotations: it explicitly states that the tool does NOT verify the signature. This, combined with annotations (readOnlyHint, idempotentHint, destructiveHint), gives a full picture of side-effect-free inspection.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the primary action and output. Every word adds value, and the structure is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is complete for a low-complexity tool. It specifies the output (header and payload), warns about no verification, and the presence of an output schema covers return values. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage and describes the token parameter adequately. The description does not add new semantic details beyond implying the token should be a JWT. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Decode', the resource 'JWT', and the output 'header and payload'. It also explicitly distinguishes from verification, ensuring the agent understands this is for inspection only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'use this for inspection only, never to trust the token's claims'. This clearly indicates when to use (inspection) and when not to (for trust), effectively differentiating from verification workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
length-converterARead-onlyIdempotentInspect
Convert between common length units: millimeter, centimeter, meter, kilometer, inch, foot, yard, mile.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | Length value to convert. |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
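A minimal sketch of how a converter like this might work internally, normalizing every unit through meters as a pivot. The function name `convertLength` and the factor table are hypothetical, not the server's actual implementation:

```javascript
// Meters per unit, covering the units the tool lists.
const METERS_PER_UNIT = {
  millimeter: 0.001,
  centimeter: 0.01,
  meter: 1,
  kilometer: 1000,
  inch: 0.0254,
  foot: 0.3048,
  yard: 0.9144,
  mile: 1609.344,
};

function convertLength(value, from, to) {
  if (!(from in METERS_PER_UNIT) || !(to in METERS_PER_UNIT)) {
    throw new Error(`Unknown unit: ${from in METERS_PER_UNIT ? to : from}`);
  }
  // Normalize to meters, then divide by the target unit's factor.
  const result = (value * METERS_PER_UNIT[from]) / METERS_PER_UNIT[to];
  // Echo inputs back alongside the result, matching the output schema.
  return { to, from, value, result };
}
```

Conversions through a single pivot unit keep the factor table linear in the number of units instead of quadratic in the number of unit pairs.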
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the safety profile is clear. The description adds no further behavioral traits beyond listing units, which is already in the schema. Since annotations cover the key behaviors, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no filler. It conveys the essential information efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations and output schema (present but not provided), the description is mostly complete. It might omit handling of invalid units or edge cases, but for a simple converter, this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all three parameters (value, from, to). The description redundantly lists the units but adds no extra semantic meaning beyond the enum values. Thus baseline 3 is correct.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Convert' and resource 'length units', listing all supported units (millimeter to mile). It clearly distinguishes from sibling tools like angle-converter or temperature-converter by domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for converting between any two listed length units. While it doesn't exclude other tools, the sibling tools are all for different domains (angle, area, etc.), so the context is clear enough without explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
loan-calculator (A) · Read-only · Idempotent
Compute the monthly payment, total interest, and total cost of a fixed-rate amortizing loan. Uses the standard amortization formula. All amounts use the same (unspecified) currency.
| Name | Required | Description | Default |
|---|---|---|---|
| principal | Yes | Loan amount (principal). | |
| term_years | Yes | Loan term in years. | |
| annual_rate_percent | Yes | Annual interest rate as a percentage (e.g. 6.5 for 6.5%). |
Output Schema
| Name | Required | Description |
|---|---|---|
| months | Yes | Term in months. |
| principal | Yes | Principal, echoed back. |
| term_years | Yes | Term in years, echoed back. |
| total_paid | Yes | Total paid (principal + interest). |
| total_interest | Yes | Total interest paid over the life of the loan. |
| monthly_payment | Yes | Monthly payment amount. |
| annual_rate_percent | Yes | Annual rate, echoed back. |
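The "standard amortization formula" the description mentions can be sketched as below. This is an illustrative reimplementation (the function name `loanCalculator` is hypothetical), assuming monthly compounding and no rounding of intermediate values:

```javascript
function loanCalculator(principal, term_years, annual_rate_percent) {
  const months = term_years * 12;
  const r = annual_rate_percent / 100 / 12; // monthly interest rate
  // Standard amortization: M = P * r / (1 - (1 + r)^-n).
  // A zero-rate loan degenerates to simple division.
  const monthly_payment =
    r === 0 ? principal / months : (principal * r) / (1 - Math.pow(1 + r, -months));
  const total_paid = monthly_payment * months;
  return {
    months,
    principal,
    term_years,
    total_paid,
    total_interest: total_paid - principal,
    monthly_payment,
    annual_rate_percent,
  };
}
```

For example, a $200,000 loan over 30 years at 6.5% works out to roughly $1,264 per month under this formula.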
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, covering safety. The description adds that it uses the standard amortization formula and currency note, which provides minor extra context but no deeper behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core action, no wasted words. Very concise and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of loan calculations and the presence of an output schema, the description is mostly complete. It mentions the formula and currency. Minor missing details like compounding frequency or rounding, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all three parameters. The description does not add meaningful new information beyond what the schema already provides, so score is at baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool computes: monthly payment, total interest, and total cost of a fixed-rate amortizing loan. It uses specific verbs and resource, distinguishing it from sibling calculator tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates usage for fixed-rate amortizing loans, but lacks explicit guidance on when not to use or alternatives. However, given the sibling tools are unrelated, this is adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
palindrome-checker (A) · Read-only · Idempotent
Test whether a word or phrase is a palindrome. Compares the input lowercased and stripped of non-alphanumeric characters against its reverse.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Word or phrase to test. |
Output Schema
| Name | Required | Description |
|---|---|---|
| length | Yes | Length of the cleaned input. |
| cleaned | Yes | Input lowercased and stripped of non-alphanumeric characters. |
| reversed | Yes | The cleaned input reversed. |
| is_palindrome | Yes | True if the input is a palindrome. |
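The normalize-and-compare logic the description spells out (lowercase, strip non-alphanumeric, compare against the reverse) can be sketched as follows; the function name `checkPalindrome` is hypothetical:

```javascript
function checkPalindrome(text) {
  // Lowercase and strip everything that is not a-z or 0-9.
  const cleaned = text.toLowerCase().replace(/[^a-z0-9]/g, "");
  // Spread into code points, reverse, and rejoin.
  const reversed = [...cleaned].reverse().join("");
  return {
    length: cleaned.length,
    cleaned,
    reversed,
    is_palindrome: cleaned === reversed,
  };
}
```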
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows it's a safe, idempotent read. The description adds behavioral details beyond annotations: lowercasing and stripping non-alphanumeric characters, which is not in the input schema. This gives the agent a precise understanding of the transformation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the purpose ('Test whether a word or phrase is a palindrome') and then providing the algorithm. No unnecessary words; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one parameter, straightforward logic) and presence of an output schema, the description sufficiently covers the tool's behavior. It explains the normalization and comparison logic, which is all an agent needs to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'text' described as 'Word or phrase to test.' The description adds meaning by explaining the preprocessing steps (lowercasing, stripping non-alphanumeric) and that the input is compared to its reverse, which is not in the schema description. This helps the agent understand the exact semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool tests whether a word or phrase is a palindrome, specifying the verb 'test' and the resource. It also explains the algorithm (lowercase, strip non-alphanumeric, compare to reverse), which distinguishes it from sibling tools like 'reverse-text' or 'case-converter'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but does not provide explicit guidance on when to use it versus alternatives. Since sibling tools are all different (e.g., angle-converter, word-counter), confusion is minimal, but no when-to-use or when-not-to-use context is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
password-generator (A) · Read-only · Idempotent
Generate cryptographically random passwords. Caller controls length and character classes (uppercase / lowercase / numbers / symbols). Uses crypto.getRandomValues — strong enough for real passwords.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | How many passwords to generate. Defaults to 1. Max 50. | |
| length | No | Password length in characters. Defaults to 16. Min 4, max 128. | |
| numbers | No | Include 0-9. Defaults true. | |
| symbols | No | Include !@#$%^&*-_=+. Defaults true. | |
| lowercase | No | Include a-z. Defaults true. | |
| uppercase | No | Include A-Z. Defaults true. |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | How many passwords were generated. |
| length | Yes | Password length used. |
| passwords | Yes | Array of generated passwords. |
| pool_size | Yes | Size of the character pool the passwords were drawn from. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, non-destructive behavior; description adds the detail that it uses crypto.getRandomValues, confirming strong cryptographic randomness for passwords.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with immediate purpose statement and no extraneous information, perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool, full schema coverage, output schema exists, and annotations cover safety, the description adds the cryptographic guarantee and is fully complete for agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the description merely groups parameters (uppercase/lowercase/numbers/symbols) without adding deeper semantic meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Generate cryptographically random passwords' with specification of caller control over character classes, differentiating it from sibling tools like uuid-generator and random-number.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for generating passwords with control over character classes and cryptographic strength, but does not explicitly mention when not to use or compare to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
percentage-calculator (A) · Read-only · Idempotent
Calculate percentages three ways: what's X% of Y, what % is X of Y, and what's the % change from X to Y. Use mode='of' for the first form, mode='ratio' for the second, mode='change' for the third.
| Name | Required | Description | Default |
|---|---|---|---|
| a | Yes | First number. Meaning depends on mode (see description). | |
| b | Yes | Second number. Meaning depends on mode. | |
| mode | Yes | Which percentage operation. 'of' = X% of Y, 'ratio' = X is what % of Y, 'change' = % change from X to Y. | |
| precision | No | Decimal places in the result. Defaults to 2. |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | Calculated result. |
| formula | Yes | Formula used to compute the result. |
| explanation | Yes | Human-readable description of what was computed. |
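The three modes map to three small formulas. A hypothetical sketch (the function name `percentage` and the exact formula strings are illustrative):

```javascript
function percentage(a, b, mode, precision = 2) {
  let raw, formula;
  if (mode === "of") {
    raw = (a / 100) * b;                 // X% of Y
    formula = `(${a} / 100) * ${b}`;
  } else if (mode === "ratio") {
    raw = (a / b) * 100;                 // X is what % of Y
    formula = `(${a} / ${b}) * 100`;
  } else if (mode === "change") {
    raw = ((b - a) / a) * 100;           // % change from X to Y
    formula = `((${b} - ${a}) / ${a}) * 100`;
  } else {
    throw new Error(`Unknown mode: ${mode}`);
  }
  const result = Number(raw.toFixed(precision));
  const explanation =
    mode === "of" ? `${a}% of ${b} is ${result}` :
    mode === "ratio" ? `${a} is ${result}% of ${b}` :
    `Percent change from ${a} to ${b} is ${result}%`;
  return { result, formula, explanation };
}
```

So `percentage(25, 80, "of")` yields 20, `percentage(20, 80, "ratio")` yields 25, and `percentage(50, 75, "change")` yields 50.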
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, read-only behavior. The description adds context on the three calculation modes but does not introduce behavioral surprises.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the main purpose and then detail the modes. No extraneous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple, and description, schema, and annotations together fully cover purpose, parameters, and behavior. Output schema exists so return format is documented elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. The description adds value by explaining the meaning of parameters relative to each mode, going beyond schema details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool calculates percentages in three distinct modes: 'of', 'ratio', and 'change'. It clearly distinguishes from sibling conversion/calculation tools on the same server.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on when to use each mode with examples like 'Use mode='of' for the first form'. It eliminates ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pressure-converter (A) · Read-only · Idempotent
Convert between pressure units: pascal, kilopascal, bar, psi, atmosphere, mmhg.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | Pressure value to convert. |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states 'Convert' which aligns with the non-destructive nature indicated by annotations. It adds no behavioral detail beyond what annotations already provide (readOnlyHint, destructiveHint, idempotentHint). There is no contradiction, but no added value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, clear sentence front-loads the verb and resource, including a comprehensive unit list. Every word serves a purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a straightforward conversion tool with complete schema coverage and annotations, the description sufficiently names the operations and supported units. Output schema handles return format, and no additional context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description lists the units already enumerated in the schema, adding no additional semantic context beyond the schema's own descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts between pressure units, listing all supported units explicitly. It distinctly separates pressure-converter from sibling tools like angle-converter or temperature-converter, which serve different domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit usage guidance such as when to use or avoid. However, the tool's purpose is self-explanatory and siblings are distinct converters, so no direct competition. The annotations (readOnlyHint, idempotentHint) hint at safe use, but the description could include a brief note on appropriate scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prime-checker (A) · Read-only · Idempotent
Test whether an integer ≥ 2 is prime. Returns the verdict plus the prime factorization (useful for composite numbers).
| Name | Required | Description | Default |
|---|---|---|---|
| n | Yes | The integer to test. Must be ≥ 2. |
Output Schema
| Name | Required | Description |
|---|---|---|
| n | Yes | Input number, echoed back. |
| is_prime | Yes | True if n is prime. |
| factor_count | Yes | Number of prime factors (with multiplicity). |
| prime_factors | Yes | Prime factorization of n. |
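A primality verdict plus full factorization, as this output schema describes, can be produced in one pass of trial division. A hypothetical sketch (the function name `primeCheck` is assumed):

```javascript
function primeCheck(n) {
  if (!Number.isInteger(n) || n < 2) throw new Error("n must be an integer >= 2");
  const prime_factors = [];
  let m = n;
  // Trial division: divide out each factor fully before moving on,
  // so only primes ever land in prime_factors.
  for (let d = 2; d * d <= m; d++) {
    while (m % d === 0) {
      prime_factors.push(d);
      m /= d;
    }
  }
  if (m > 1) prime_factors.push(m); // leftover factor is itself prime
  return {
    n,
    is_prime: prime_factors.length === 1,
    factor_count: prime_factors.length,
    prime_factors,
  };
}
```

A number is prime exactly when its factorization has a single element (itself), which is why the verdict falls out of the same loop.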
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds that it returns both a primality verdict and prime factorization, providing extra behavioral context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the purpose and concisely adds the return value details. Every word contributes meaning; no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple, deterministic tool with a single parameter and an existing output schema, the description fully covers the necessary context: input constraint, behavior, and output content.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides a comprehensive description of parameter n (type, constraints, and purpose) with 100% coverage. The tool description does not add any further semantic meaning to the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('test'), the resource ('integer ≥ 2'), and the purpose ('is prime'). It also specifies the output (verdict and prime factorization), distinguishing it from sibling tools which are unrelated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly defines the usage context by specifying the input constraint (integer ≥ 2) and the nature of the output. However, it does not explicitly state when to use or avoid using the tool, though no alternative prime-checking tools exist among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
random-number (A) · Read-only · Idempotent
Generate one or more random numbers in [min, max]. By default returns integers; pass integer=false for floats. Uses Math.random() (not crypto-strong).
| Name | Required | Description | Default |
|---|---|---|---|
| max | Yes | Inclusive upper bound. | |
| min | Yes | Inclusive lower bound. | |
| count | No | How many numbers to generate. Defaults to 1. Max 1000 per call. | |
| integer | No | If true, returns integers only. Defaults to true. |
Output Schema
| Name | Required | Description |
|---|---|---|
| max | Yes | Upper bound used (inclusive). |
| min | Yes | Lower bound used (inclusive). |
| count | Yes | How many numbers were generated. |
| integer | Yes | Whether integers were requested. |
| numbers | Yes | Array of generated numbers. |
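A sketch of the inclusive-bounds behavior described above, using `Math.random()` as the description notes. The function name `randomNumbers` is hypothetical:

```javascript
function randomNumbers({ min, max, count = 1, integer = true }) {
  if (min > max) throw new Error("min must be <= max");
  const numbers = [];
  for (let i = 0; i < count; i++) {
    numbers.push(
      integer
        // The +1 makes the upper bound inclusive for integers.
        ? Math.floor(Math.random() * (max - min + 1)) + min
        : Math.random() * (max - min) + min
    );
  }
  return { max, min, count, integer, numbers };
}
```

Because this uses `Math.random()`, it is fine for dice rolls and sampling but should not be used for secrets; that is the password-generator's job.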
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds important behavioral info beyond annotations: mentions non-crypto-strong nature and default integer behavior. Annotations already declare readOnlyHint and idempotentHint, so description complements well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences covering purpose, defaults, and limitation. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Output schema exists, so return values not needed. Description covers input bounds, type, count limit, and cryptographic weakness – complete for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and includes defaults for count and integer. Description adds minimal extra: clarifies [min, max] inclusive and reinforces max count per call. Baseline 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'generate' and resource 'random numbers' with bounds [min, max] and integer/float option. Unambiguous and distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use for non-cryptographic randomness via 'Math.random() (not crypto-strong)', but does not explicitly state when to use vs alternatives like password-generator or uuid-generator.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
regex-tester (A) · Read-only · Idempotent
Test a JavaScript regex against text. Returns all matches with their positions, captured groups, and named groups. Defaults to global flag so all occurrences are found.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to test against. | |
| flags | No | Regex flags. Allowed: g, i, m, s, u, y. Defaults to 'g'. | |
| pattern | Yes | The regex pattern, without delimiters. Standard JavaScript regex syntax. |
Output Schema
| Name | Required | Description |
|---|---|---|
| flags | Yes | Flags used. |
| matches | Yes | First 100 matches with positions and captured groups. |
| pattern | Yes | Regex pattern, echoed back. |
| truncated | Yes | True if there were more than 100 matches and the list was cut to the first 100. |
| match_count | Yes | Total number of matches found. |
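The matching and truncation behavior can be sketched with `String.prototype.matchAll`, which requires the global flag this tool defaults to. The function name `regexTest` is hypothetical:

```javascript
function regexTest(text, pattern, flags = "g") {
  // matchAll requires the 'g' flag, so force it if the caller omitted it.
  const re = new RegExp(pattern, flags.includes("g") ? flags : flags + "g");
  const all = [...text.matchAll(re)].map((m) => ({
    match: m[0],
    index: m.index,
    groups: m.slice(1),    // positional capture groups
    named: m.groups ?? {}, // named capture groups, if any
  }));
  return {
    flags,
    matches: all.slice(0, 100),
    pattern,
    truncated: all.length > 100,
    match_count: all.length,
  };
}
```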
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint (true) and destructiveHint (false). The description adds context about default global behavior and output details (positions, groups) beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no unnecessary words, front-loaded with purpose. Every sentence provides useful information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (inferred), the description adequately covers the main behavior: output includes matches, positions, groups. It does not cover error handling, but that is acceptable for a utility tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 3 parameters. The description adds value by clarifying the default 'g' flag behavior and the nature of the operation. This exceeds the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Test a JavaScript regex against text' and specifies the output includes matches, positions, captured groups, and named groups. This is a specific verb-resource combination that distinguishes it from sibling tools like converters and calculators.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that default global flag finds all occurrences, which guides usage for exhaustive matching. It does not explicitly exclude alternatives, but given unrelated siblings, this is sufficient. Could be improved by mentioning when to use flags parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rem-to-px (A) · Read-only · Idempotent
Convert REM (relative to root font size) to pixels. Defaults to a 16px base (browser default); pass base_px to use a different root size.
| Name | Required | Description | Default |
|---|---|---|---|
| rem | Yes | The REM value to convert. | |
| base_px | No | Base font size in pixels. Defaults to 16 (browser default). |
Output Schema
| Name | Required | Description |
|---|---|---|
| rem | Yes | Input REM value, echoed back. |
| pixels | Yes | Equivalent pixel value. |
| base_px | Yes | Base font size used. |
| em_equivalent | Yes | Equivalent em value (same as rem in this context). |
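The conversion itself is a single multiplication against the root font size. A hypothetical sketch (the function name `remToPx` is assumed):

```javascript
function remToPx(rem, base_px = 16) {
  const pixels = rem * base_px; // REM is a multiple of the root font size
  return {
    rem,
    pixels,
    base_px,
    // em equals rem when the element inherits the root font size,
    // which is what the output schema's note means by "in this context".
    em_equivalent: rem,
  };
}
```

With the browser-default 16px base, `1.5rem` is 24px; with a 10px base (a common CSS reset), `2rem` is 20px.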
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds practical context (default 16px base, optional override) without contradicting structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the purpose and key detail (default base). No superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple conversion tool with a full input schema and an output schema (not shown but present), the description covers all needed context: purpose, default, and optional parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters have descriptions). The description reiterates the default base but adds no new meaning beyond what the schema already provides. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Convert' and the resource 'REM to pixels', specifying the default base and the optional base_px parameter. It distinguishes itself from sibling converters (e.g., length-converter) by focusing on REM units.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use (when needing REM->PX conversion) but does not explicitly contrast with alternatives. However, given the narrow focus, the context is clear enough for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reverse-text · A · Read-only · Idempotent
Reverse a string character-by-character. Unicode-aware — handles emoji and combining characters correctly using Array.from on the iterator.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to reverse. |
Output Schema
| Name | Required | Description |
|---|---|---|
| output | Yes | Reversed text, Unicode-safe. |
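A minimal sketch of the documented behavior, assuming straightforward code-point reversal (Python iterates strings by code point, which is what the Array.from approach mentioned above does in JavaScript):

```python
def reverse_text(text: str) -> dict:
    # Python reverses by code point: astral-plane emoji stay intact,
    # though grapheme clusters are not specially grouped in this sketch.
    return {"output": "".join(reversed(text))}
```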
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent behavior. The description adds details about Unicode handling and internal implementation (Array.from), providing extra transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core action and key feature (Unicode-aware), no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple text reversal tool, the description covers behavior, the schema fully documents the parameter, and an output schema exists. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage with a description for the parameter. The tool description further explains the reversal is character-by-character and Unicode-aware, adding meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it reverses a string character-by-character and is Unicode-aware, which is specific and distinguishes it from sibling tools that are converters or calculators.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear context for when to use it (reverse text), and sibling tools are unrelated, but no explicit exclusions or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
salary-to-hourly · A · Read-only · Idempotent
Convert an annual salary to hourly, daily, weekly, and monthly equivalents. Defaults to a standard 40-hour, 52-week schedule.
| Name | Required | Description | Default |
|---|---|---|---|
| annual_salary | Yes | Annual gross salary in the same (unspecified) currency as the output. | |
| hours_per_week | No | Working hours per week. Defaults to 40. | |
| weeks_per_year | No | Working weeks per year. Defaults to 52. |
Output Schema
| Name | Required | Description |
|---|---|---|
| annual | Yes | Annual rate, echoed back. |
| hourly | Yes | Hourly rate. |
| weekly | Yes | Weekly rate. |
| monthly | Yes | Monthly rate. |
| daily_8h | Yes | Daily rate at 8 hours/day. |
| hours_per_week | Yes | Hours per week used. |
| weeks_per_year | Yes | Weeks per year used. |
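The arithmetic implied by the description and output schema can be sketched as follows (hypothetical reimplementation; field names are taken from the output schema):

```python
def salary_to_hourly(annual_salary: float,
                     hours_per_week: float = 40,
                     weeks_per_year: float = 52) -> dict:
    """Hypothetical sketch of the documented salary breakdown."""
    hourly = annual_salary / (hours_per_week * weeks_per_year)
    return {
        "annual": annual_salary,
        "hourly": hourly,
        "weekly": annual_salary / weeks_per_year,
        "monthly": annual_salary / 12,
        "daily_8h": hourly * 8,  # daily rate at 8 hours/day, per the schema
        "hours_per_week": hours_per_week,
        "weeks_per_year": weeks_per_year,
    }
```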
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive. The description adds default schedule (40-hour, 52-week) and output types (hourly, daily, weekly, monthly), which provides useful context beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences directly state purpose and defaults. No unnecessary words; optimal length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, description doesn't need to detail return values. It covers conversion types and defaults. Param descriptions handle other details. Complete for this simple conversion tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear descriptions. The description adds context about the output (daily and monthly equivalents) that isn't in param descriptions, raising it above baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts an annual salary to hourly, daily, weekly, and monthly equivalents, with clear verb 'Convert'. It distinguishes from sibling converters by specifying salary conversion, which no other sibling does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit instructions on when to use this tool versus alternatives. The description implies its use for salary conversion but doesn't say when not to use it or contrast it with similar tools like tip-calculator. Given its clear domain, it scores at the middle.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scientific-notation · A · Read-only · Idempotent
Convert a number to scientific and engineering notation. Returns the coefficient, exponent, and several human-readable forms.
| Name | Required | Description | Default |
|---|---|---|---|
| n | Yes | The number to convert. Accepts standard or already-in-scientific-notation values. | |
| precision | No | Significant digits in the coefficient. Defaults to 4. |
Output Schema
| Name | Required | Description |
|---|---|---|
| exponent | Yes | The exponent part. |
| standard | Yes | Standard decimal form. |
| scientific | Yes | Number in scientific notation (e.g. '4.5 × 10^-5'). |
| coefficient | Yes | The coefficient part. |
| engineering | Yes | Engineering notation form. |
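A rough sketch of the documented conversion, assuming a log10-based exponent and an engineering exponent snapped to a multiple of 3; the server's handling of zero and negative inputs is not documented, so this sketch assumes a positive input:

```python
import math

def scientific_notation(n: float, precision: int = 4) -> dict:
    """Hypothetical sketch; assumes n > 0 (log10 is undefined at zero)."""
    exponent = math.floor(math.log10(abs(n)))
    coefficient = round(n / 10 ** exponent, precision - 1)
    eng_exp = exponent - (exponent % 3)  # nearest lower multiple of 3
    return {
        "coefficient": coefficient,
        "exponent": exponent,
        # naive fixed-point rendering for the standard decimal form
        "standard": format(n, "f").rstrip("0").rstrip("."),
        "scientific": f"{coefficient} × 10^{exponent}",
        "engineering": f"{n / 10 ** eng_exp:g} × 10^{eng_exp}",
    }
```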
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true, signaling safety. The description adds return types but no additional behavioral details such as error handling or edge cases. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that front-load the primary action and outputs. Every word is necessary and no filler is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity and full schema coverage with annotations, the description adequately explains the return values. It lacks explanation of edge cases (e.g., zero) but is otherwise complete given the presence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, both parameters have descriptions. The tool description adds no further meaning beyond what the schema provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts numbers to scientific and engineering notation, specifying the returned outputs (coefficient, exponent, human-readable forms). This verb+resource description distinguishes it from sibling tools like angle-converter or fraction-simplifier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for numeric notation conversion but does not explicitly state when to use it versus alternatives, nor does it provide exclusions or preconditions. The context of sibling converter/calculator tools makes the purpose clear, but no direct guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sha256-generator · A · Read-only · Idempotent
Compute the SHA-256 hash of a text string. Returns the digest as a 64-character lowercase hex string. UTF-8 encoded.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to hash. UTF-8 encoded before hashing. |
Output Schema
| Name | Required | Description |
|---|---|---|
| hex | Yes | 64-character lowercase hex digest. |
| algorithm | Yes | Hash algorithm name (always 'SHA-256'). |
| bytes_hashed | Yes | Number of bytes that were hashed. |
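The documented behavior maps directly onto a standard library call. A hypothetical Python equivalent (field names taken from the output schema):

```python
import hashlib

def sha256_generator(text: str) -> dict:
    """Hypothetical sketch: UTF-8 encode, hash, return lowercase hex digest."""
    data = text.encode("utf-8")
    return {
        "hex": hashlib.sha256(data).hexdigest(),  # 64 lowercase hex chars
        "algorithm": "SHA-256",
        "bytes_hashed": len(data),
    }
```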
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive. Description adds UTF-8 encoding and hex output format, enhancing transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, no wasted words. Core information front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-param tool with clear annotations, the description fully covers purpose, input encoding, and output format, and an output schema documents the return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameter with description. Tool description adds encoding detail but does not add significant new semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (compute), resource (SHA-256 hash of text), and output (64-character hex string). It uniquely identifies the tool among siblings like angle-converter or word-counter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use when a SHA-256 hash is needed. No explicit exclusion, but siblings are diverse so context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sort-lines · A · Read-only · Idempotent
Sort the lines of a text block. Options: ascending or descending, optional dedupe, optional case-insensitive compare.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Multi-line text. Each line is a separate sort key. | |
| unique | No | If true, dedupe lines as part of the sort. Default false. | |
| direction | No | Sort direction. Default asc. | |
| case_insensitive | No | If true, compare case-insensitively. Default false. |
Output Schema
| Name | Required | Description |
|---|---|---|
| output | Yes | Sorted text with newline-joined lines. |
| line_count | Yes | Number of lines in the output. |
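A sketch of the documented options. Whether dedupe respects the case-insensitive flag is not stated in the description; this sketch assumes it does:

```python
def sort_lines(text: str, unique: bool = False,
               direction: str = "asc",
               case_insensitive: bool = False) -> dict:
    """Hypothetical sketch of sort-lines with its three documented options."""
    lines = text.split("\n")
    if unique:
        seen, deduped = set(), []
        for line in lines:
            key = line.lower() if case_insensitive else line
            if key not in seen:       # keep first occurrence of each line
                seen.add(key)
                deduped.append(line)
        lines = deduped
    lines.sort(key=str.lower if case_insensitive else None,
               reverse=(direction == "desc"))
    return {"output": "\n".join(lines), "line_count": len(lines)}
```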
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. Description adds option details but no additional behavioral traits beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short units: a sentence naming the verb and noun, then a compact options list. Front-loaded, no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple sorting tool with full schema coverage and an output schema, the description covers all necessary aspects: action, options, and constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. Description lists options (ascending/descending, dedupe, case-insensitive) which matches schema properties, but adds little meaning beyond the schema's own descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Sort the lines of a text block' with specific options (ascending/descending, dedupe, case-insensitive). Distinct from sibling 'dedup-lines' which only deduplicates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use over siblings like 'dedup-lines'. The purpose is clear, but the description lacks explicit when-to-use or when-not-to-use advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
speed-converter · A · Read-only · Idempotent
Convert between speed units: km/h, m/s, mph, knots, ft/s.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit: km/h, m/s, mph, knots, ft/s. | |
| value | Yes | Speed value to convert. |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
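One plausible implementation of the documented conversion routes every unit through metres per second; the factors below are standard definitions, not taken from the server:

```python
# metres per second per one unit of each supported speed unit
TO_MPS = {
    "km/h": 1 / 3.6,
    "m/s": 1.0,
    "mph": 0.44704,     # exactly 1609.344 m / 3600 s
    "knots": 0.514444,  # 1852 m / 3600 s, rounded
    "ft/s": 0.3048,
}

def speed_converter(value: float, from_unit: str, to_unit: str) -> dict:
    """Hypothetical sketch: normalise to m/s, then convert out."""
    result = value * TO_MPS[from_unit] / TO_MPS[to_unit]
    return {"value": value, "from": from_unit, "to": to_unit, "result": result}
```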
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds no behavioral traits beyond the conversion operation, but the annotations carry the burden of transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the action and resource, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With full schema coverage, annotations, and an existing output schema, the description provides all necessary context: the conversion action and the complete list of units. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for all three parameters. The description lists the units but does not add meaningful information beyond what is in the schema. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Convert') and the resource ('speed units'), listing all supported units (km/h, m/s, mph, knots, ft/s), making it distinct from sibling conversion tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool or when not to, nor does it mention alternatives. Usage is implied by the domain (speed) and sibling context, but no direct guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
square-root-calculator · A · Read-only · Idempotent
Compute square root, cube root, square, and cube of a number. Returns all four so callers don't have to call four tools.
| Name | Required | Description | Default |
|---|---|---|---|
| n | Yes | The input number. Must be non-negative for the square root branch. |
Output Schema
| Name | Required | Description |
|---|---|---|
| cube | Yes | n³ |
| square | Yes | n² |
| cube_root | Yes | ∛n |
| square_root | Yes | √n |
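The four outputs can be sketched in a few lines. The sign-aware cube root below is an assumption; the description only constrains the square-root branch to non-negative input:

```python
import math

def square_root_calculator(n: float) -> dict:
    """Hypothetical sketch returning all four values in one call."""
    # sign-aware exponentiation so negative inputs still get a real cube root
    cube_root = math.copysign(abs(n) ** (1 / 3), n)
    return {
        "square_root": math.sqrt(n),  # raises ValueError for negative n
        "cube_root": cube_root,
        "square": n ** 2,
        "cube": n ** 3,
    }
```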
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, covering safety. The description adds that the tool returns all four computed values, which is behavioral context beyond annotations. However, it doesn't discuss edge cases like negative inputs, though the schema addresses that.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences: first stating the action, second explaining the benefit. No wasted words, front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (so return values are documented), the description is complete. It covers purpose, benefit, and the tool's composite nature. Annotations and schema handle the rest.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the schema already describing parameter 'n' (non-negative for square root). The description does not add further semantics; baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool computes square root, cube root, square, and cube of a number. This specific verb+resource combination distinguishes it from sibling tools, especially by noting it returns all four operations in one call, avoiding multiple tool invocations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool is for when you need any of these four operations, with the bonus of avoiding multiple calls. While it doesn't explicitly state when not to use, the context of sibling tools (none provide these individual operations) makes the usage clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
temperature-converter · A · Read-only · Idempotent
Convert between Celsius, Fahrenheit, and Kelvin. Pass value + from + to.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | The temperature value to convert. |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
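A sketch of the documented conversion, assuming a Celsius pivot (normalise in, convert out); the server's actual approach and unit spellings are not shown:

```python
def temperature_converter(value: float, from_unit: str, to_unit: str) -> dict:
    """Hypothetical sketch: convert via Celsius. Units assumed 'C', 'F', 'K'."""
    to_c = {"C": lambda v: v,
            "F": lambda v: (v - 32) * 5 / 9,
            "K": lambda v: v - 273.15}
    from_c = {"C": lambda c: c,
              "F": lambda c: c * 9 / 5 + 32,
              "K": lambda c: c + 273.15}
    result = from_c[to_unit](to_c[from_unit](value))
    return {"value": value, "from": from_unit, "to": to_unit, "result": result}
```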
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint and idempotentHint, so the description adds minimal behavioral info. It doesn't detail output format or edge cases, but annotations cover safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences with no wasted words. Information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, full annotations, and output schema, the description is adequate. It could mention precision or return format, but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents parameters. The description does not add any extra meaning beyond 'Convert between...', thus baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the function: converting between Celsius, Fahrenheit, and Kelvin. This clearly distinguishes it from sibling tools like angle-converter or length-converter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description instructs to 'Pass value + from + to', which implies basic usage. While it doesn't explicitly state when not to use, the tool's name and context among other converters make alternatives obvious.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tip-calculator · A · Read-only · Idempotent
Compute the tip, total, and per-person split for a bill. All amounts use the same (unspecified) currency — the engine doesn't care about currency codes.
| Name | Required | Description | Default |
|---|---|---|---|
| bill | Yes | Pre-tip bill amount, in the same currency as the output. | |
| split | No | Number of people splitting the bill. Defaults to 1. | |
| tip_percent | Yes | Tip percentage (e.g. 18 for 18%). |
Output Schema
| Name | Required | Description |
|---|---|---|
| tip | Yes | Tip amount. |
| bill | Yes | Pre-tip bill amount. |
| split | Yes | Number of people the bill is split between. |
| total | Yes | Total bill including tip. |
| per_person_tip | Yes | Each person's share of the tip. |
| per_person_total | Yes | Each person's share of the total. |
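The arithmetic the description promises is simple to sketch (hypothetical reimplementation; field names are taken from the output schema):

```python
def tip_calculator(bill: float, tip_percent: float, split: int = 1) -> dict:
    """Hypothetical sketch: tip, total, and per-person split, currency-agnostic."""
    tip = bill * tip_percent / 100
    total = bill + tip
    return {
        "bill": bill,
        "tip": tip,
        "total": total,
        "split": split,
        "per_person_tip": tip / split,
        "per_person_total": total / split,
    }
```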
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true. Description adds value by noting the engine is currency-agnostic ('doesn't care about currency codes'), which is a key behavioral trait. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with purpose, then a clarifying note. No wasted words; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple calculator with an output schema, the description covers the core functionality and critical behavior (currency agnostic). No gaps given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description does not add significant meaning beyond schema; it reinforces that all amounts use the same currency, but that is already implied.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it computes tip, total, and per-person split for a bill. Specific verb+resource distinguishes it from sibling tools like 'percentage-calculator' and 'discount-calculator'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. alternatives such as 'percentage-calculator' or 'loan-calculator'. The description only states what it computes, not in what scenarios it is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url-encoder · A · Read-only · Idempotent
URL-encode or URL-decode a string. Uses RFC 3986 component encoding (encodeURIComponent semantics).
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | encode: text → URL-encoded. decode: URL-encoded → text. | |
| input | Yes | Input string. |
Output Schema
| Name | Required | Description |
|---|---|---|
| mode | Yes | Mode used, echoed back. |
| output | Yes | The encoded or decoded result. |
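Python's urllib.parse can approximate the documented encodeURIComponent semantics if the safe set is widened to the characters encodeURIComponent leaves unescaped. A hedged sketch (the exact safe set is an assumption based on RFC 3986, not on the server's code):

```python
from urllib.parse import quote, unquote

def url_encoder(input_str: str, mode: str) -> dict:
    """Hypothetical sketch of encode/decode with component semantics."""
    if mode == "encode":
        # encodeURIComponent leaves !'()* unescaped; quote would escape
        # them by default, so list them as safe. Note '/' is NOT safe here.
        output = quote(input_str, safe="!'()*")
    else:
        output = unquote(input_str)
    return {"mode": mode, "output": output}
```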
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds the significant behavioral detail that encoding follows 'RFC 3986 component encoding (encodeURIComponent semantics)', which clarifies exactly how the encoding works. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two brief sentences conveying all essential information: the tool's purpose and the encoding standard. It is front-loaded with the key action and resource, leaving no room for redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (encode/decode with string input and output schema), the description fully covers the necessary context. It states both modes and the encoding standard, which is sufficient for correct usage. The presence of an output schema means return values are already documented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides clear descriptions for both parameters (mode with enum, input with maxLength). The description adds the encoding standard context, further clarifying the behavior of the mode parameter. With 100% schema coverage, the description complements well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'URL-encode or URL-decode a string' with the specific verb and resource. It also specifies 'RFC 3986 component encoding (encodeURIComponent semantics)', which distinguishes it from other URL-related tools like url-parser. The tool name matches the purpose exactly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for encoding/decoding strings for URLs, and the sibling tools context provides alternatives (e.g., base64, jwt-decoder). However, it does not explicitly state when not to use this tool or mention specific alternatives, so it lacks complete exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
url-parserARead-onlyIdempotentInspect
Parse a URL into its components: protocol, host, port, path, query parameters, hash. Returns the query string parsed into a key-value map.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to parse. Must include the protocol (http:// or https://). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| hash | Yes | Fragment identifier including leading '#', or empty. |
| host | Yes | Hostname plus port if present. |
| path | Yes | Pathname including leading slash. |
| port | Yes | Port number as a string, or null. |
| query | Yes | Raw query string including leading '?', or empty. |
| origin | Yes | Origin (protocol + host). |
| params | Yes | Parsed query parameters. |
| hostname | Yes | Hostname only. |
| protocol | Yes | URL protocol without trailing colon (e.g. 'https'). |
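The documented output fields line up with the WHATWG URL API. A sketch of how they could be produced; the exact field mapping (stripping the trailing colon from `protocol`, `null` for a missing port) is an assumption based on the schema descriptions above:

```typescript
// Sketch of url-parser using the WHATWG URL API.
function parseUrl(url: string) {
  const u = new URL(url); // throws TypeError on malformed input or missing protocol
  return {
    protocol: u.protocol.slice(0, -1), // 'https'; WHATWG includes the trailing colon
    origin: u.origin,                  // protocol + host
    host: u.host,                      // hostname plus port if present
    hostname: u.hostname,              // hostname only
    port: u.port || null,              // port as a string, or null when absent
    path: u.pathname,                  // includes the leading slash
    query: u.search,                   // raw query string with leading '?', or empty
    params: Object.fromEntries(u.searchParams), // query parsed into a key-value map
    hash: u.hash,                      // fragment with leading '#', or empty
  };
}
```

`URLSearchParams` keeps only the last value for a repeated key under `Object.fromEntries`; how the server handles duplicate query keys is not documented.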
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, providing a safe read operation profile. The description adds that it returns components, but does not disclose error handling for malformed URLs or partial inputs. Without additional behavioral context, the description meets minimum requirements but lacks depth for edge cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. Directly states purpose and output. Front-loaded with action verb and resource. Each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no nested objects, clear annotations), the description is adequate. It explains what the tool does and the output format. Minor omission: it does not detail the return structure beyond the query map, but the output schema documents that. Score 4 for being nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%: the single 'url' parameter is described as 'The URL to parse. Must include the protocol (http:// or https://).' The tool description reinforces that parsing produces components, but does not add new semantics beyond the schema. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: parsing a URL into components (protocol, host, port, path, query parameters, hash) and returning query string as key-value map. This is a specific verb+resource that distinguishes it from sibling utilities like url-encoder or other converters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives, nor does it provide exclusions or prerequisites. However, the purpose is straightforward, and the intended use is implied by the description of parsing URL components.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
uuid-generatorARead-onlyIdempotentInspect
Generate one or more RFC 4122 v4 UUIDs (random). Returns an array of strings, even for a single UUID, so callers don't have to special-case length.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | How many UUIDs to generate. Max 100 per call. | 1 |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | How many UUIDs were generated. |
| uuids | Yes | Array of generated UUIDs. |
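The always-an-array contract can be sketched with Node's `crypto.randomUUID`, which produces RFC 4122 v4 UUIDs; the range check mirrors the documented default of 1 and maximum of 100 (the function name is hypothetical):

```typescript
import { randomUUID } from "node:crypto";

// Sketch of uuid-generator: count defaults to 1, capped at 100 per call.
function generateUuids(count = 1): { count: number; uuids: string[] } {
  if (!Number.isInteger(count) || count < 1 || count > 100) {
    throw new RangeError("count must be an integer between 1 and 100");
  }
  // Always return an array, even for a single UUID, so callers
  // never have to special-case length.
  const uuids = Array.from({ length: count }, () => randomUUID());
  return { count, uuids };
}
```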
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by specifying the return format (array of strings) and the behavior for single UUIDs, which is beyond what annotations provide. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no redundant information. Every sentence adds value: first states purpose, second clarifies return type and behavior. Very concise and structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, strong annotations, and an output schema exists), the description is complete. It covers purpose, return format, and guidance on the count parameter (implied by 'one or more'), leaving no ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter 'count' is fully described with min, max, and default. The description does not add additional meaning or constraints beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates RFC 4122 v4 UUIDs (random) and returns an array of strings. It distinguishes itself from siblings like random-number or password-generator by specifying the UUID standard and return format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that the tool generates one or more UUIDs and that the return is always an array, which guides usage. It does not explicitly mention when not to use the tool or name alternatives, but for a simple utility the intended usage is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
volume-converterARead-onlyIdempotentInspect
Convert between volume units: liter, milliliter, cubic-meter, gallon-us, quart-us, pint-us, fluid-ounce-us, cup-us.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | Volume value to convert. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
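All the unit converters on this server (volume, weight, angle, and so on) share one pattern: normalize to a base unit, then scale into the target. A sketch for volume, using the standard US customary definitions in liters; the server's exact factors and any rounding behavior are not documented:

```typescript
// Liters per unit; the US customary values are the exact legal definitions.
const LITERS: Record<string, number> = {
  liter: 1,
  milliliter: 0.001,
  "cubic-meter": 1000,
  "gallon-us": 3.785411784,
  "quart-us": 0.946352946,
  "pint-us": 0.473176473,
  "fluid-ounce-us": 0.0295735295625,
  "cup-us": 0.2365882365,
};

function convertVolume(value: number, from: string, to: string) {
  // Normalize to liters, then scale into the target unit.
  const result = (value * LITERS[from]) / LITERS[to];
  return { value, from, to, result }; // inputs echoed back, per the output schema
}
```

The weight-converter below follows the same shape with a table of kilograms per unit.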
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds no behavioral context beyond the conversion purpose, so it meets the baseline without going further.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that directly conveys the tool's purpose and supported units with no redundant information. It is appropriately front-loaded and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, annotations covering safety, complete schema with descriptions, and an output schema, the description provides sufficient information for an agent to correctly invoke the tool. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for each parameter. The description lists the units already present in the schema enums, adding no meaningful extra meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts volume units and lists all supported units, making the verb and resource specific. The unit differentiation is sufficient given sibling tools are for other quantities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit when-to-use or when-not-to-use guidance. Usage is implied by the tool's purpose, but no explicit alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vowel-counterARead-onlyIdempotentInspect
Count vowels and consonants in text. ASCII-only — handles English-style letters. Returns vowel/consonant counts and the vowel ratio.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to analyze. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| words | Yes | Word count. |
| vowels | Yes | Vowel count. |
| letters | Yes | Total letters (vowels + consonants). |
| consonants | Yes | Consonant count. |
| vowel_ratio_percent | Yes | Vowels as a percentage of letters. |
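The described ASCII-only counting can be sketched as below; treating 'y' as a consonant and matching words as runs of ASCII letters are assumptions the description does not settle:

```typescript
// Sketch of vowel-counter: ASCII letters only, 'aeiou' as vowels.
function countVowelsConsonants(text: string) {
  const letters = text.match(/[a-z]/gi) ?? []; // ASCII-only, per the description
  const vowels = letters.filter((c) => "aeiou".includes(c.toLowerCase())).length;
  const consonants = letters.length - vowels;
  const words = (text.match(/[a-z]+/gi) ?? []).length; // assumed word rule
  return {
    words,
    vowels,
    consonants,
    letters: letters.length,
    // Vowels as a percentage of letters; 0 when there are no letters.
    vowel_ratio_percent: letters.length ? (vowels / letters.length) * 100 : 0,
  };
}
```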
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. Description adds ASCII-only constraint and specifies exact outputs (counts and ratio). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-loading key information: what it does, constraints, and outputs. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, the description covers purpose, input constraints, and return values. No missing elements for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'text' described as 'Text to analyze.' The description does not add much beyond the schema, though it clarifies the ASCII-only handling. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it counts vowels and consonants, distinct from sibling tools like character-counter or word-counter. Specific verb 'Count' with clear resource 'vowels and consonants in text'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states ASCII-only and handles English-style letters, implying usage for English text and non-use for non-ASCII. Could be stronger by directly mentioning when not to use, but still clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
weight-converterARead-onlyIdempotentInspect
Convert between common weight units: milligram, gram, kilogram, ounce, pound, stone, ton.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target unit. | |
| from | Yes | Source unit. | |
| value | Yes | Weight value to convert. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| to | Yes | Target unit, echoed back. |
| from | Yes | Source unit, echoed back. |
| value | Yes | Input value, echoed back. |
| result | Yes | Converted value in the target unit. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description need not repeat those. However, the description adds no additional behavioral context (e.g., precision, rounding, or error handling) beyond the basic conversion action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the action ('Convert between common weight units') and immediately lists the supported units. Every word is necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, full schema coverage, output schema presence, and informative annotations, the description provides sufficient context for an agent to correctly invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already documents the three parameters (value, from, to). The description lists the allowed units, which mirrors the enum values in the schema, adding marginal semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts between common weight units and lists the specific units (milligram, gram, kilogram, ounce, pound, stone, ton). This distinguishes it from sibling tools like length-converter or temperature-converter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. While the name and description imply use for weight conversion, there are no explicit instructions about prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whitespace-removerARead-onlyIdempotentInspect
Remove or collapse whitespace in text. Modes: trim (edges only), collapse (also fold internal runs to one space), all (strip every whitespace character).
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | trim: leading/trailing only. collapse: also collapse internal runs to one space. all: remove every whitespace character. | collapse |
| text | Yes | Text to clean. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| mode | Yes | Mode used, echoed back. |
| output | Yes | Cleaned text. |
| removed | Yes | Number of characters removed. |
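The three modes can be sketched as follows, with 'collapse' as the stated default and `removed` derived as the length difference per the output schema (the exact whitespace class the server uses is an assumption):

```typescript
type WsMode = "trim" | "collapse" | "all";

// Sketch of whitespace-remover's three documented modes.
function removeWhitespace(text: string, mode: WsMode = "collapse") {
  let output: string;
  if (mode === "trim") {
    output = text.trim(); // edges only
  } else if (mode === "collapse") {
    output = text.trim().replace(/\s+/g, " "); // also fold internal runs to one space
  } else {
    output = text.replace(/\s+/g, ""); // strip every whitespace character
  }
  return { mode, output, removed: text.length - output.length };
}
```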
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds specifics on modes and their effects, providing additional behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with a list of modes, front-loaded with purpose. No wasted words; highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple text transformation tool, the description combined with input schema fully specifies behavior. Output schema exists but is not needed to explain return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters with descriptions (100% coverage). The description adds the default value for mode, which is not in the schema, adding extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool removes or collapses whitespace, with specific verbs and resource. It distinguishes from siblings as a unique text utility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains each mode and when to use them, but does not explicitly state when not to use this tool or mention sibling alternatives. However, given the distinct purpose, it's clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
word-counterARead-onlyIdempotentInspect
Count words, characters, sentences, paragraphs, and reading time in a block of text. Words are Unicode-aware (handles non-Latin scripts). Reading time assumes 240 wpm.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to analyze. Capped at 60,000 characters. Larger inputs should be split client-side. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| words | Yes | Word count. |
| sentences | Yes | Sentence count. |
| characters | Yes | Character count including spaces. |
| paragraphs | Yes | Paragraph count. |
| reading_minutes | Yes | Estimated reading time in minutes at 240 wpm. |
| characters_no_spaces | Yes | Character count excluding spaces. |
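A sketch of the documented metrics: the Unicode-aware word match uses the `\p{L}` property escape, while the sentence and paragraph heuristics are assumptions (the server's exact rules are not documented). Reading time uses the stated 240 wpm:

```typescript
// Sketch of word-counter's metrics.
function countText(text: string) {
  // Unicode-aware: runs of letters or digits count as words (assumed rule).
  const words = (text.match(/[\p{L}\p{N}]+/gu) ?? []).length;
  // Assumed heuristics: sentences end in . ! or ?; paragraphs are
  // separated by blank lines.
  const sentences = (text.match(/[^.!?]+[.!?]+/g) ?? []).length;
  const paragraphs = text.split(/\n{2,}/).filter((p) => p.trim()).length;
  const characters = text.length;
  const characters_no_spaces = text.replace(/\s/g, "").length;
  const reading_minutes = words / 240; // 240 wpm, per the description
  return { words, characters, characters_no_spaces, sentences, paragraphs, reading_minutes };
}
```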
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows it's a safe, idempotent read operation. The description adds that it is Unicode-aware and reading time assumes 240 wpm, but beyond that, no additional behavioral traits are disclosed. The description does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the key counts. Every sentence adds necessary detail (Unicode-awareness and reading time assumption). No unnecessary words. It is appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, output schema exists), the description covers all essential aspects: what it counts, assumptions for reading time, and character limits. The description is complete for an agent to understand the tool's function and usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'text' is fully described in the input schema (maxLength, description about splitting). The description adds value by listing what metrics are computed (words, characters, sentences, paragraphs, reading time), which is not in the schema. This helps the agent understand how the input is processed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool counts words, characters, sentences, paragraphs, and reading time in a block of text. It distinguishes itself from sibling tools (mostly converters and calculators) by being a text analysis tool. The verb 'count' and the listed resources clearly define its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly indicates what the tool does, making its usage context apparent. However, it does not explicitly differentiate it from the sibling 'character-counter' tool, which may also count characters. Although the description covers more metrics, the lack of explicit guidance on when to choose it over a similar tool slightly reduces the score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!