
Server Details

Microapp offers premium utility tools for humans and AI agents, accessible at microapp.io and through this MCP endpoint.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 46 of 46 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes. However, 'hex-to-rgb' is redundant with 'color-converter', which already handles hex-to-RGB conversion, causing potential confusion.

Naming Consistency: 4/5

Names follow a consistent lowercase-with-hyphens style, but vary in pattern (e.g., 'angle-converter', 'average-calculator', 'dedup-lines'). One tool ('internal-do-not-call') deviates from the descriptive norm.

Tool Count: 2/5

With 46 tools, the server is heavily populated. Many converters could be merged into a generic unit converter, and there is redundancy, making the surface unnecessarily large for a single server.

Completeness: 4/5

The server covers a broad range of utility domains: converters, text processing, math, cryptography, etc. Minor redundancies exist (e.g., hex-to-rgb vs color-converter), but the set is otherwise comprehensive.

Available Tools

46 tools
angle-converter: A
Read-only · Idempotent

Convert between angle units: degree, radian, gradian, turn.

Parameters (JSON Schema)
to (required): Target unit.
from (required): Source unit.
value (required): Angle value to convert.

Output Schema

to (required): Target unit, echoed back.
from (required): Source unit, echoed back.
value (required): Input value, echoed back.
result (required): Converted value in the target unit.
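The server's implementation is not shown on this page, but the conversion the description promises is simple enough to sketch. A minimal Python version (hypothetical; unit names taken from the schema's enum) might pivot through degrees:

```python
import math

# Degrees per one unit of each supported angle unit (assumed factors).
DEGREES_PER_UNIT = {
    "degree": 1.0,
    "radian": 180.0 / math.pi,
    "gradian": 0.9,        # 400 gradians per full turn
    "turn": 360.0,
}

def convert_angle(value: float, from_unit: str, to_unit: str) -> float:
    """Convert an angle by pivoting through degrees."""
    degrees = value * DEGREES_PER_UNIT[from_unit]
    return degrees / DEGREES_PER_UNIT[to_unit]
```

Pivoting through a single base unit keeps the conversion table linear in the number of units rather than quadratic.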
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, which inform the agent that the tool is safe and idempotent. The description adds the specific supported units, providing useful context without contradicting the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that conveys the tool's purpose without any superfluous words. It is front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, with a well-defined input schema and an output schema present. The description, combined with the schema and annotations, provides complete information for an agent to understand and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for all parameters. The tool description does not add additional meaning beyond what the schema provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts between angle units (degree, radian, gradian, turn). It uses a specific verb 'Convert' and lists the exact resources, distinguishing it from sibling converter tools like area-converter or length-converter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for converting angles but provides no explicit guidance on when to use this tool versus alternatives or any prerequisites. The purpose is clear, but no usage scenarios or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

area-converter: A
Read-only · Idempotent

Convert between area units: square-meter, square-kilometer, square-foot, square-yard, acre, hectare, square-mile.

Parameters (JSON Schema)
to (required): Target unit.
from (required): Source unit.
value (required): Area value to convert.

Output Schema

to (required): Target unit, echoed back.
from (required): Source unit, echoed back.
value (required): Input value, echoed back.
result (required): Converted value in the target unit.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, and destructiveHint, which convey safety and idempotency. The description adds the list of supported units, which is behavioral context not in annotations. It does not mention return format or precision, but with output schema present, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently communicates the purpose and supported units. No extraneous words or repetition of schema fields. It is front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool with a rich schema (100% coverage, enums, output schema), the description is adequately complete. It covers what the tool does and which units are supported. It does not explain conversion behavior (e.g., precision), but that is standard and implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with each parameter having a basic description (e.g., 'Source unit.'). The tool description adds the list of units, which is already present in the schema's enum. Thus, the description adds no meaningful semantics beyond the schema, earning a baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Convert between area units' and lists the seven supported units. It uses a specific verb 'Convert' and identifies the resource 'area units', which distinguishes it from sibling converters like length-converter or temperature-converter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is given on when to use or avoid this tool versus alternatives. However, given the naming and the context of many sibling converters, the purpose is self-explanatory. A baseline score of 3 is appropriate; the absence of when-not-to-use guidance keeps it from scoring higher.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

aspect-ratio: A
Read-only · Idempotent

Compute the simplified aspect ratio of a width × height pair, plus the decimal ratio. Useful for image, video, screen, and design contexts.

Parameters (JSON Schema)
width (required): Width in any unit.
height (required): Height in the same unit as width.

Output Schema

gcd (required): Greatest common divisor used to simplify.
simplified (required): Simplified width and height.
ratio_string (required): Simplified ratio in 'W:H' form.
ratio_decimal (required): Decimal width-to-height ratio.
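The output schema above maps directly onto a gcd-based simplification; a hypothetical Python sketch:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> dict:
    """Simplify a width x height pair using the greatest common divisor."""
    g = gcd(width, height)
    return {
        "gcd": g,
        "simplified": {"width": width // g, "height": height // g},
        "ratio_string": f"{width // g}:{height // g}",
        "ratio_decimal": width / height,
    }
```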
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the readOnly/idempotent/destructive hints. The description adds that it returns the simplified ratio and the decimal value, but does not disclose edge cases or limitations. This is acceptable given that the annotations cover safety.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: function + usage context. No wasted words, front-loaded with action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator with full schema and output schema, description provides complete context. No missing information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions. Description adds no additional parameter details beyond schema. Baseline 3 per instructions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it computes the simplified aspect ratio and decimal ratio. It distinguishes from sibling tools, which are other converters, and gives specific use contexts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions useful contexts (image, video, screen, design) but lacks explicit when-not-to-use guidance or pointers to alternative tools. Since there are no closely related siblings, 4 is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

average-calculator: A
Read-only · Idempotent

Compute summary statistics for a list of numbers: count, sum, mean, median, min, max, and mode(s).

Parameters (JSON Schema)
numbers (required): Array of numbers to summarize.

Output Schema

max (required): Largest value.
min (required): Smallest value.
sum (required): Sum of all inputs.
mean (required): Arithmetic mean.
count (required): How many numbers were summarized.
modes (required): Most frequently occurring values.
median (required): Middle value (or average of the two middle values for even-length lists).
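All of these statistics are standard-library territory; a hypothetical sketch of the computation:

```python
from collections import Counter
from statistics import mean, median

def summarize(numbers: list[float]) -> dict:
    """Summary statistics matching the output schema's fields."""
    counts = Counter(numbers)
    top = max(counts.values())
    return {
        "count": len(numbers),
        "sum": sum(numbers),
        "mean": mean(numbers),
        "median": median(numbers),  # averages the two middle values for even lengths
        "min": min(numbers),
        "max": max(numbers),
        "modes": sorted(v for v, c in counts.items() if c == top),
    }
```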
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the safety profile is clear. The description adds the specific statistics computed, which is useful context but does not reveal additional behavioral traits beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose and outputs. No extraneous information; every word earns its place. Could be slightly more structured but is adequate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists (so return format is documented elsewhere), the description sufficiently covers what the tool does. For a simple calculator tool with one parameter, it is complete: it specifies the statistics computed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a description for the 'numbers' parameter. The tool description lists statistics but does not add deeper meaning or constraints beyond what the schema provides, so it meets the baseline without adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Compute summary statistics' and the resource 'list of numbers'. It lists exactly which statistics are computed (count, sum, mean, median, min, max, mode(s)), distinguishing it from siblings like 'geometric-mean' or 'percentage-calculator'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for summarizing numeric lists but does not provide explicit guidance on when to use this tool versus alternatives like geometric-mean or when not to use it (e.g., for single values). No when-not or exclusion criteria are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

base64: A
Read-only · Idempotent

Encode plain text to base64 or decode base64 back to text. Use mode='encode' for plain→base64, mode='decode' for base64→plain. UTF-8 safe.

Parameters (JSON Schema)
mode (required): encode: text → base64. decode: base64 → text.
input (required): Input string. Plain text for encode, base64 for decode.

Output Schema

mode (required): Mode used, echoed back.
output (required): The encoded or decoded result.
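The 'UTF-8 safe' claim presumably means text is encoded to UTF-8 bytes before base64 and decoded back afterward; a hypothetical sketch:

```python
import base64

def base64_tool(mode: str, input_str: str) -> dict:
    """Encode or decode, going through UTF-8 bytes in both directions."""
    if mode == "encode":
        output = base64.b64encode(input_str.encode("utf-8")).decode("ascii")
    elif mode == "decode":
        output = base64.b64decode(input_str).decode("utf-8")
    else:
        raise ValueError("mode must be 'encode' or 'decode'")
    return {"mode": mode, "output": output}
```

Round-tripping through UTF-8 is what keeps non-ASCII input (accented characters, emoji) intact.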
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description does not need to repeat safety. The description adds value by stating 'UTF-8 safe', which informs about character encoding handling. This goes beyond annotations without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two brief sentences. The first sentence establishes purpose, the second gives usage guidance. No unnecessary words; every sentence serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, an output schema exists (as per context), so return values need not be described. The description covers encoding/decoding direction, UTF-8 safety, and the two parameters. It is complete for this tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both mode and input well-described in the schema. The description restates the mode enumeration and adds 'UTF-8 safe', but does not significantly augment the schema's parameter documentation. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Encode plain text to base64 or decode base64 back to text.' It specifies the verb (encode/decode) and resource (base64 text), and it is distinguished from sibling converters since no other tool handles base64.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use each mode: 'Use mode='encode' for plain→base64, mode='decode' for base64→plain.' This gives clear context for usage, though it does not mention when not to use the tool or alternatives. For a simple tool, this is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

case-converter: A
Read-only · Idempotent

Change the case of text: UPPERCASE, lowercase, Title Case, or Sentence case.

Parameters (JSON Schema)
case (required): Target case. upper=ALL CAPS, lower=all lower, title=Title Case Each Word, sentence=First letter only.
text (required): Text to transform.

Output Schema

case (required): Case used, echoed back.
output (required): Transformed text.
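Three of the four modes map directly onto Python string methods; sentence case needs a small amount of hand-rolling. A hypothetical sketch:

```python
def convert_case(text: str, case: str) -> str:
    """Transform text into one of the four supported cases."""
    if case == "upper":
        return text.upper()
    if case == "lower":
        return text.lower()
    if case == "title":
        return text.title()
    if case == "sentence":
        # Capitalize only the first character, lowercase the rest.
        return text[:1].upper() + text[1:].lower()
    raise ValueError(f"unknown case: {case!r}")
```

Note that Python's str.title() capitalizes after apostrophes ("it's" becomes "It'S"), one of the locale and edge-case quirks the rationale below alludes to.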
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds no new behavioral context beyond what the schema provides. It does not disclose any traits like pure transformation or handling of edge cases. With annotations covering safety, a score of 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently communicates the core functionality. It is front-loaded with the action and lists the specific case options. No wasted words or unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists (as indicated by context) and the input schema is fully documented, the description adequately covers the tool's behavior. It specifies the transformation and case options, which is sufficient for a simple utility. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for both parameters (text and case). The description only restates the case types already defined in the schema. It does not add any new semantic meaning beyond what the schema provides, so a baseline of 3 is justified.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Change the case of text' and lists four specific case options (UPPERCASE, lowercase, Title Case, Sentence case). This clearly identifies the tool's purpose and distinguishes it from siblings, as no other sibling tool handles case conversion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implicitly clear given the unique purpose among siblings. However, the description does not explicitly state when to use this tool versus alternatives or when not to use it (e.g., for non-standard locale rules). The context of sibling tools provides enough differentiation, but explicit guidance is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

character-counter: A
Read-only · Idempotent

Count characters in text, with and without spaces. Returns separate counts so you can answer questions like 'fits in a tweet (280 chars)?' or 'fits in an SMS (160 chars)?' without guessing.

Parameters (JSON Schema)
text (required): The text to analyze. Capped at 60,000 characters.

Output Schema

fits_sms (required): True if 160 characters or fewer.
characters (required): Total characters.
fits_tweet (required): True if 280 characters or fewer.
characters_no_spaces (required): Characters excluding ASCII whitespace.
characters_no_whitespace (required): Characters excluding all Unicode whitespace.
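The distinction between the two "no spaces" counts (ASCII whitespace vs. all Unicode whitespace) is the subtle part; a hypothetical sketch:

```python
ASCII_WHITESPACE = set(" \t\n\r\f\v")

def count_characters(text: str) -> dict:
    """Character counts matching the output schema's fields."""
    total = len(text)
    return {
        "characters": total,
        "characters_no_spaces": sum(1 for c in text if c not in ASCII_WHITESPACE),
        "characters_no_whitespace": sum(1 for c in text if not c.isspace()),
        "fits_tweet": total <= 280,
        "fits_sms": total <= 160,
    }
```

str.isspace() also matches Unicode whitespace such as the non-breaking space, which the ASCII set deliberately excludes.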
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint, idempotentHint, and destructiveHint (false). The description adds that it returns separate counts (with/without spaces) and references the 60k character cap, which is also in the schema but reinforces behavioral expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core function and examples, no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and annotations, the description fully covers what the tool does and how to use it. The use case examples help an agent select it correctly among many text utilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'text'. The description does not add new semantic details about the parameter itself, only context for its use. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool counts characters with and without spaces, and provides concrete use cases (tweet or SMS length checks). It distinguishes from siblings like word-counter and vowel-counter by focusing on character counting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context for when to use the tool (e.g., checking if text fits Twitter or SMS limits). However, it does not explicitly mention when not to use it or alternatives, though the sibling tools list implies alternatives like word-counter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

color-converter: A
Read-only · Idempotent

Convert a color between hex, RGB, HSL, and HSV representations. Auto-detects the input format from the string syntax.

Parameters (JSON Schema)
input (required): The input color string. Accepts: #RGB, #RRGGBB, #RGBA, #RRGGBBAA, rgb(r,g,b), rgba(r,g,b,a), hsl(h,s%,l%), hsla(h,s%,l%,a). Spaces optional.

Output Schema

hex (required): 6-digit hex color with leading #.
hsl (required): HSL representation.
hsv (required): HSV representation.
rgb (required): RGB representation.
rgba (required): RGBA representation.
hex_with_alpha (required): 8-digit hex with alpha if alpha < 1, else same as hex.
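Auto-detection presumably keys off the leading '#' or the function-style prefix; a hypothetical sketch of just the format-detection step:

```python
import re

# Matches #RGB, #RGBA, #RRGGBB, and #RRGGBBAA forms.
HEX_PATTERN = re.compile(r"#(?:[0-9a-fA-F]{3,4}|[0-9a-fA-F]{6}|[0-9a-fA-F]{8})")

def detect_color_format(s: str) -> str:
    """Guess the input color format from string syntax alone."""
    s = s.strip()
    if HEX_PATTERN.fullmatch(s):
        return "hex"
    if s.lower().startswith(("rgb(", "rgba(")):
        return "rgb"
    if s.lower().startswith(("hsl(", "hsla(")):
        return "hsl"
    raise ValueError(f"unrecognized color syntax: {s!r}")
```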
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds the important behavioral detail of auto-detecting input format, which is beyond annotation. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words, front-loaded with the core action. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a single string parameter and the presence of an output schema, the description fully covers what the tool does and how input is handled. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema already lists accepted formats. The description reiterates auto-detection but adds little new semantic information beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it converts colors between hex, RGB, HSL, and HSV, and auto-detects input format, distinguishing it from more specific sibling tools like 'hex-to-rgb'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly conveys when to use (when needing color conversion) but does not explicitly state when not to use or provide alternatives. No exclusion criteria mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

data-storage-converter: A
Read-only · Idempotent

Convert between digital storage units: bit, byte, kilobyte (1024 B), megabyte, gigabyte, terabyte. Uses binary (1024-based) sizing.

Parameters (JSON Schema)
to (required): Target unit.
from (required): Source unit. Uses binary (1024-based) sizing for byte→kilobyte etc.
value (required): Storage value to convert.

Output Schema

to (required): Target unit, echoed back.
from (required): Source unit, echoed back.
value (required): Input value, echoed back.
result (required): Converted value in the target unit.
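Binary (1024-based) sizing is the detail worth pinning down; with each unit expressed in bits, conversion is one multiply and one divide. A hypothetical sketch:

```python
# Bits per unit, using binary (1024-based) sizing as the description states.
UNIT_IN_BITS = {
    "bit": 1,
    "byte": 8,
    "kilobyte": 8 * 1024,
    "megabyte": 8 * 1024**2,
    "gigabyte": 8 * 1024**3,
    "terabyte": 8 * 1024**4,
}

def convert_storage(value: float, from_unit: str, to_unit: str) -> float:
    """Convert by expressing both units in bits."""
    return value * UNIT_IN_BITS[from_unit] / UNIT_IN_BITS[to_unit]
```

Under this convention 1 kilobyte = 1024 bytes (the IEC name for that unit is kibibyte); a decimal SI converter would return 1000 instead, which is why the rationale below flags the lack of a decimal alternative.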
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explicitly states that conversions use binary (1024-based) sizing, which is key behavioral information. Annotations already indicate read-only and idempotent, so the description adds the specific binary sizing context. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences, front-loaded with the core purpose and the critical detail about binary sizing. Every sentence is necessary, and nothing is redundant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the full schema coverage, existing annotations, and presence of an output schema, the description is complete. It covers all essential aspects: purpose, units, and the binary sizing convention.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptions for each parameter. The description adds value by listing the units explicitly and reinforcing that 'from' uses binary (1024-based) sizing, which also appears in the schema description. Full coverage sets the baseline at 3, but the added clarity earns 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts between digital storage units and lists all supported units (bit, byte, kilobyte, megabyte, gigabyte, terabyte), distinguishing it from sibling converters like length-converter or temperature-converter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions it uses binary (1024-based) sizing, which guides usage, but does not explicitly state when to use this tool versus alternatives (e.g., if a user wants decimal-based conversion, this tool is not appropriate). No exclusions or alternatives are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

days-between (A)
Read-only · Idempotent

Calculate the number of days between two ISO 8601 dates. Returns days, weeks, months, and years (decimal). Inclusive count = absolute |end - start|.

Parameters (JSON Schema)
Name | Required | Description | Default
end | Yes | End date in ISO 8601 (YYYY-MM-DD or full ISO timestamp). | -
start | Yes | Start date in ISO 8601 (YYYY-MM-DD or full ISO timestamp). | -

Output Schema (JSON Schema)
Name | Required | Description
days | Yes | Number of days between the dates (absolute).
weeks | Yes | Equivalent number of weeks.
years | Yes | Approximate number of years (assuming 365.25 days/year).
months | Yes | Approximate number of months (assuming 30.4375 days/month).
direction | Yes | Whether end comes after or before start.
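The documented arithmetic is easy to pin down. A minimal Python sketch, assuming date-only inputs (the real tool also accepts full ISO timestamps) and using the 365.25 days/year and 30.4375 days/month factors stated in the output schema; the direction labels are illustrative, not the server's actual strings:

```python
from datetime import date

def days_between(start: str, end: str) -> dict:
    """Sketch of the documented behavior: absolute day difference plus
    derived units using 365.25 days/year and 30.4375 days/month."""
    s, e = date.fromisoformat(start), date.fromisoformat(end)
    days = abs((e - s).days)
    return {
        "days": days,
        "weeks": days / 7,
        "months": days / 30.4375,
        "years": days / 365.25,
        # Label format is a guess; the schema only says "after or before".
        "direction": "end_after_start" if e >= s else "end_before_start",
    }
```

Argument order does not matter for the counts, only for the direction field.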
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (read-only, idempotent, non-destructive), the description adds that it returns weeks, months, years, and uses inclusive absolute difference, providing valuable behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with purpose and followed by return details. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and comprehensive annotations, the description covers all necessary context for correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema already describes both parameters with 100% coverage. The description adds no new parameter information beyond restating that they are ISO 8601 dates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates days between two ISO 8601 dates, which is specific and distinct from sibling converter/calculator tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It clearly states the tool calculates date differences, which is the primary usage context. No explicit alternative or when-not-to-use guidance is needed given siblings are unrelated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

dedup-lines (A)
Read-only · Idempotent

Remove duplicate lines from a text block. Optionally case-insensitive. Order is preserved (not sorted).

Parameters (JSON Schema)
Name | Required | Description | Default
keep | No | Which occurrence to keep. Default first. | -
text | Yes | Multi-line text. | -
case_insensitive | No | If true, lines that differ only in case are considered duplicates. Default false. | -

Output Schema (JSON Schema)
Name | Required | Description
kept | Yes | Number of lines kept.
output | Yes | Deduplicated text with newline-joined lines.
removed | Yes | Number of duplicate lines removed.
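The described semantics (order-preserving dedup, optional case folding, configurable kept occurrence) can be sketched as follows; the field names mirror the schema, but the implementation itself is assumed:

```python
def dedup_lines(text: str, case_insensitive: bool = False, keep: str = "first") -> dict:
    # Order-preserving deduplication: the chosen occurrence survives,
    # and surviving lines keep their original relative order.
    lines = text.split("\n")
    key = str.lower if case_insensitive else (lambda s: s)
    seen, survivors = set(), []
    if keep == "last":
        # Walk in reverse so the last occurrence wins, then restore order.
        for line in reversed(lines):
            if key(line) not in seen:
                seen.add(key(line))
                survivors.append(line)
        survivors.reverse()
    else:
        for line in lines:
            if key(line) not in seen:
                seen.add(key(line))
                survivors.append(line)
    return {
        "output": "\n".join(survivors),
        "kept": len(survivors),
        "removed": len(lines) - len(survivors),
    }
```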
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, providing safety profile. The description adds behavioral details beyond annotations: order preservation and optional case-insensitive matching, which are not captured in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load the main action ('Remove duplicate lines') and include essential behavioral nuances (case-insensitivity, order preservation) with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, annotations, and full schema coverage, the description is sufficient. It covers purpose, key behaviors (case-insensitivity, order preservation), and implicitly the use case.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters fully. The description mentions case-insensitive matching but does not add new meaning beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it removes duplicate lines from a text block, specifies optional case-insensitivity, and explicitly notes that order is preserved (not sorted). This distinguishes it from the sibling tool 'sort-lines' which sorts lines.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly indicates the tool's purpose for deduplication and order preservation, implying when to use it. However, it does not explicitly mention when not to use it or directly compare with siblings like 'sort-lines', but the context of sibling tools provides sufficient differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discount-calculator (A)
Read-only · Idempotent

Apply a percentage discount to a price. Returns sale price, amount saved, and the discount percentage echoed back.

Parameters (JSON Schema)
Name | Required | Description | Default
original | Yes | Original price. | -
discount_percent | Yes | Discount percentage (e.g. 25 for 25%). | -

Output Schema (JSON Schema)
Name | Required | Description
original | Yes | Original price, echoed back.
you_save | Yes | Amount saved.
sale_price | Yes | Price after discount.
discount_percent | Yes | Discount percent, echoed back.
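The computation is elementary; a hedged sketch of what the echoed output plausibly looks like (rounding to cents is an assumption, not documented):

```python
def apply_discount(original: float, discount_percent: float) -> dict:
    # sale_price = original * (1 - p/100); you_save is the difference.
    you_save = original * discount_percent / 100
    return {
        "original": original,
        "discount_percent": discount_percent,
        "you_save": round(you_save, 2),      # rounding is an assumption
        "sale_price": round(original - you_save, 2),
    }
```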
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds useful behavioral details about returned values (sale price, amount saved, discount percentage) beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence front-loading the action and results. Could be more structured but is efficient for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (2 parameters, output schema exists), the description sufficiently explains the tool's function and output. No gaps for its simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The tool description does not add new semantic meaning beyond the schema, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool applies a percentage discount to a price and lists the returned values. It is specific and distinct from sibling tools like percentage-calculator.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs. alternatives such as percentage-calculator or tip-calculator. Usage is implied but unclear for an agent needing differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

energy-converter (A)
Read-only · Idempotent

Convert between energy units: joule, kilojoule, calorie (thermochemical), kilocalorie, watt-hour, btu.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Target unit. | -
from | Yes | Source unit. | -
value | Yes | Energy value to convert. | -

Output Schema (JSON Schema)
Name | Required | Description
to | Yes | Target unit, echoed back.
from | Yes | Source unit, echoed back.
value | Yes | Input value, echoed back.
result | Yes | Converted value in the target unit.
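A unit converter like this is typically implemented by normalizing through a base unit. A sketch assuming joules as the base; the factors are standard (thermochemical calorie = 4.184 J), though the server's exact BTU definition is not documented:

```python
# Joules per unit; standard conversion factors.
TO_JOULES = {
    "joule": 1.0,
    "kilojoule": 1000.0,
    "calorie": 4.184,        # thermochemical calorie
    "kilocalorie": 4184.0,
    "watt-hour": 3600.0,
    "btu": 1055.06,          # ISO BTU; other BTU definitions differ slightly
}

def convert_energy(value: float, from_unit: str, to_unit: str) -> dict:
    # Normalize to joules, then scale into the target unit.
    result = value * TO_JOULES[from_unit] / TO_JOULES[to_unit]
    return {"value": value, "from": from_unit, "to": to_unit, "result": result}
```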
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the description carries less burden. It does not add behavioral details but does not contradict annotations; overall sufficient for a simple conversion tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that immediately communicates the tool's purpose. Every word adds value with no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description combined with schema and annotations covers the essential aspects. A minor addition about rounding or typical use cases could enhance completeness, but it is already well-specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter having a descriptive label. The description only lists units, adding no extra meaning beyond what the schema already provides, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function—converting between energy units—and lists the specific units supported, which distinguishes it from sibling converters for other physical quantities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (when you need energy unit conversion) but provides no guidance on when not to use it or alternatives; the sibling tools handle other domains, but this is not mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fraction-simplifier (A)
Read-only · Idempotent

Simplify a fraction to lowest terms. Returns the simplified numerator/denominator, the decimal value, the gcd used, and the equivalent percentage.

Parameters (JSON Schema)
Name | Required | Description | Default
numerator | Yes | Numerator (integer). | -
denominator | Yes | Denominator (integer, non-zero). | -

Output Schema (JSON Schema)
Name | Required | Description
gcd | Yes | Greatest common divisor used.
decimal | Yes | Decimal value of the fraction.
percentage | Yes | Fraction expressed as a percentage.
simplified | Yes | Simplified fraction.
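The documented outputs map directly onto a gcd reduction; a sketch (the error behavior for a zero denominator is assumed):

```python
from math import gcd

def simplify_fraction(numerator: int, denominator: int) -> dict:
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    g = gcd(numerator, denominator)   # math.gcd ignores sign
    return {
        "simplified": f"{numerator // g}/{denominator // g}",
        "gcd": g,
        "decimal": numerator / denominator,
        "percentage": 100 * numerator / denominator,
    }
```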
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds value by detailing return values (simplified fraction, decimal, gcd, percentage) and the core action of simplification, providing full transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys purpose and output. Every word is essential; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, rich annotations, and existing output schema, the description is fully complete. It covers purpose, behavior, and return values, leaving no gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes both parameters with 100% coverage. The description adds that denominator must be non-zero, a constraint not fully captured in schema validation, thus adding meaningful context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool simplifies a fraction to lowest terms and lists return values. It distinguishes itself from sibling tools (e.g., converters, calculators) by focusing specifically on fraction simplification.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for fractions but does not explicitly state when to use this tool instead of alternatives or provide exclusion criteria. Sibling tools vary widely, so more explicit guidance would be helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geometric-mean (A)
Read-only · Idempotent

Compute the geometric mean (n-th root of the product) of a list of positive numbers. Also returns the arithmetic mean for comparison.

Parameters (JSON Schema)
Name | Required | Description | Default
numbers | Yes | Array of positive numbers. Negative or zero values invalidate the geometric mean. | -

Output Schema (JSON Schema)
Name | Required | Description
count | Yes | How many numbers were summarized.
geometric_mean | Yes | Geometric mean (n-th root of the product).
arithmetic_mean | Yes | Arithmetic mean, for comparison.
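The n-th-root-of-the-product definition is usually computed in log space for numerical stability; a sketch of that approach, with the positivity check the schema requires:

```python
import math

def geometric_mean(numbers: list[float]) -> dict:
    if not numbers or any(x <= 0 for x in numbers):
        raise ValueError("requires a non-empty list of positive numbers")
    # exp(mean(log x)) == (x1 * x2 * ... * xn) ** (1/n), without overflow
    # on long lists of large values.
    gm = math.exp(sum(math.log(x) for x in numbers) / len(numbers))
    return {
        "count": len(numbers),
        "geometric_mean": gm,
        "arithmetic_mean": sum(numbers) / len(numbers),
    }
```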
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. Description adds that the tool also returns arithmetic mean, providing extra context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words; purpose and additional return value are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity, complete schema and annotations, and existence of output schema, the description provides all necessary context. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a clear description of the 'numbers' parameter including its constraint (values must be positive; zero or negative values invalidate the result). The description adds no additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Compute the geometric mean' and distinguishes from sibling tools like average-calculator by specifying the geometric mean and also returning the arithmetic mean for comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies when to use (computing geometric mean of positives) and implicitly contrasts with sibling average-calculator, but does not explicitly state when not to use or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hex-to-rgb (A)
Read-only · Idempotent

Convert a hex color (e.g. #ff8800) to RGB and RGBA channel values. Accepts 3-, 4-, 6-, and 8-digit hex with or without the leading #.

Parameters (JSON Schema)
Name | Required | Description | Default
hex | Yes | Hex color string. Accepts #RGB, #RGBA, #RRGGBB, #RRGGBBAA, or the same forms without the leading #. | -

Output Schema (JSON Schema)
Name | Required | Description
a | Yes | Alpha channel (0-1, defaults to 1 if not present in input).
b | Yes | Blue channel (0-255).
g | Yes | Green channel (0-255).
r | Yes | Red channel (0-255).
rgb | Yes | CSS rgb() string.
rgba | Yes | CSS rgba() string.
hex_normalized | Yes | Normalized 6-digit hex string with leading #.
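The accepted input forms expand mechanically: shorthand digits double, and an 8-digit form carries alpha. A sketch of that parsing (the normalization details are assumptions):

```python
def hex_to_rgb(hex_color: str) -> dict:
    s = hex_color.lstrip("#")
    if len(s) in (3, 4):
        s = "".join(c * 2 for c in s)   # 'f80' -> 'ff8800'
    if len(s) not in (6, 8):
        raise ValueError("expected 3-, 4-, 6-, or 8-digit hex")
    r, g, b = int(s[0:2], 16), int(s[2:4], 16), int(s[4:6], 16)
    a = int(s[6:8], 16) / 255 if len(s) == 8 else 1.0
    return {
        "r": r, "g": g, "b": b, "a": a,
        "rgb": f"rgb({r}, {g}, {b})",
        "rgba": f"rgba({r}, {g}, {b}, {a})",
        "hex_normalized": f"#{s[:6]}",    # alpha dropped, per the schema
    }
```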
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only and idempotent behavior, which aligns with a color conversion. The description adds specific input format details (accepts 3-, 4-, 6-, 8-digit hex with or without #), enhancing transparency without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that convey all essential information. No redundant words, making it concise and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description is complete: it explains input format variations and the output type (RGB/RGBA). An output schema exists, so return values are covered. No gaps remain for this straightforward conversion tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'hex' is fully described in the schema (100% coverage). The description adds no additional meaning beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: converting hex color to RGB and RGBA channel values. It uses a specific verb ('convert') and resource ('hex color to RGB and RGBA'). This distinguishes it from siblings like 'color-converter' which may be more generic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., color-converter). There are no explicit when-to-use or when-not-to-use instructions, leaving an agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

internal-do-not-call (A)
Read-only · Idempotent

Internal — do not call. (This is a honeypot; calling it bans your IP.)

Parameters (JSON Schema)

No parameters

Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description contradicts annotations: annotations set readOnlyHint=true and destructiveHint=false, but description says calling it bans IP (destructive). This is a direct contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, concise and front-loaded with critical warning. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter honeypot tool with no output schema, the description is fully complete: it explains what it is and the consequence of calling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist; description correctly omits parameter details. Baseline score 4 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Internal — do not call' and explains it is a honeypot that bans IPs. This clearly defines its purpose as a trap, distinct from sibling utility tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description directly instructs not to call it and warns of the consequence (IP ban), providing perfect usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

json-formatter (A)
Read-only · Idempotent

Format JSON: pretty-print with indentation or minify to a single line. Validates the input — invalid JSON returns a clear error pointing at the problem.

Parameters (JSON Schema)
Name | Required | Description | Default
json | Yes | Input JSON text. Must parse as valid JSON. | -
mode | No | pretty: indent with spaces (default). minify: strip all whitespace. | -
indent | No | Spaces of indent for pretty mode. Default 2. | -

Output Schema (JSON Schema)
Name | Required | Description
mode | Yes | Mode used.
output | Yes | Formatted JSON string.
bytes_in | Yes | Length of the input string.
bytes_out | Yes | Length of the output string.
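Both modes and the positioned error fall out of a standard JSON library; a Python sketch (the real server's error format is unknown, so the message here is illustrative):

```python
import json

def format_json(text: str, mode: str = "pretty", indent: int = 2) -> dict:
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError as e:
        # A "clear error pointing at the problem", per the description.
        raise ValueError(f"invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")
    if mode == "minify":
        output = json.dumps(parsed, separators=(",", ":"))
    else:
        output = json.dumps(parsed, indent=indent)
    return {"mode": mode, "output": output,
            "bytes_in": len(text), "bytes_out": len(output)}
```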
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds validation behavior and error reporting beyond annotations, which already indicate read-only, idempotent, non-destructive traits. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise, front-loaded sentences that directly convey the tool's purpose and key behavior without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple formatting tool, the description covers all essential aspects: formatting options, validation, and error handling. An output schema exists, so return value details are not needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions the two format modes but does not elaborate on the indent parameter beyond what the schema already provides. With 100% schema coverage, description adds marginal value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool formats JSON by pretty-printing or minifying, with validation. This distinguishes it from sibling utility tools that serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. Usage is implied by the tool's function, but alternatives among siblings are not addressed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jwt-decoder (A)
Read-only · Idempotent

Decode a JWT into its header and payload. Does NOT verify the signature — use this for inspection only, never to trust the token's claims.

Parameters (JSON Schema)
Name | Required | Description | Default
token | Yes | The JWT to decode. Standard 3-part base64url-encoded format: header.payload.signature. | -

Output Schema (JSON Schema)
Name | Required | Description
header | Yes | Decoded header JSON.
expired | Yes | Whether the token has expired, or null if no exp.
payload | Yes | Decoded payload JSON.
issued_at | Yes | ISO timestamp from 'iat' claim, or null.
expires_at | Yes | ISO timestamp from 'exp' claim, or null.
not_before | Yes | ISO timestamp from 'nbf' claim, or null.
signature_present | Yes | Whether the signature segment is non-empty.
signature_verified | Yes | Always false — this tool does not verify signatures.
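Decoding without verification is just base64url plus JSON parsing; a sketch showing why signature_verified is always false (the claim-timestamp fields are omitted for brevity):

```python
import base64
import json

def _decode_segment(segment: str) -> dict:
    # base64url strips '=' padding; restore it before decoding.
    return json.loads(base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4)))

def decode_jwt(token: str) -> dict:
    header_b64, payload_b64, signature = token.split(".")
    return {
        "header": _decode_segment(header_b64),
        "payload": _decode_segment(payload_b64),
        "signature_present": bool(signature),
        "signature_verified": False,   # decoding only; nothing is checked
    }
```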
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds critical behavioral information beyond annotations: it explicitly states that the tool does NOT verify the signature. This, combined with annotations (readOnlyHint, idempotentHint, destructiveHint), gives a full picture of side-effect-free inspection.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the primary action and output. Every word adds value, and the structure is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is complete for a low-complexity tool. It specifies the output (header and payload), warns about no verification, and the presence of an output schema covers return values. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage and describes the token parameter adequately. The description does not add new semantic details beyond implying the token should be a JWT. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Decode', the resource 'JWT', and the output 'header and payload'. It also explicitly distinguishes from verification, ensuring the agent understands this is for inspection only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'use this for inspection only, never to trust the token's claims'. This clearly indicates when to use (inspection) and when not to (for trust), effectively differentiating from verification workflows.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

length-converter (A)
Read-only, Idempotent

Convert between common length units: millimeter, centimeter, meter, kilometer, inch, foot, yard, mile.
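Converters like this are typically a factor table normalized through a base unit; a minimal sketch (convertLength and the METERS table are illustrative assumptions, not the server's code):

```javascript
// Convert via a factor table: value -> meters -> target unit.
const METERS = {
  millimeter: 0.001, centimeter: 0.01, meter: 1, kilometer: 1000,
  inch: 0.0254, foot: 0.3048, yard: 0.9144, mile: 1609.344,
};
function convertLength(value, from, to) {
  return (value * METERS[from]) / METERS[to];
}
```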

Parameters (JSON Schema)
to (required): Target unit.
from (required): Source unit.
value (required): Length value to convert.

Output Schema (JSON Schema)
to (required): Target unit, echoed back.
from (required): Source unit, echoed back.
value (required): Input value, echoed back.
result (required): Converted value in the target unit.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the safety profile is clear. The description adds no further behavioral traits beyond listing units, which is already in the schema. Since annotations cover the key behaviors, a score of 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no filler. It conveys the essential information efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the annotations and output schema (present but not provided), the description is mostly complete. It might omit handling of invalid units or edge cases, but for a simple converter, this is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all three parameters (value, from, to). The description redundantly lists the units but adds no extra semantic meaning beyond the enum values. Thus baseline 3 is correct.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Convert' and resource 'length units', listing all supported units (millimeter to mile). It clearly distinguishes from sibling tools like angle-converter or temperature-converter by domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for converting between any two listed length units. While it doesn't exclude other tools, the sibling tools are all for different domains (angle, area, etc.), so the context is clear enough without explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

loan-calculator (A)
Read-only, Idempotent

Compute the monthly payment, total interest, and total cost of a fixed-rate amortizing loan. Uses the standard amortization formula. All amounts use the same (unspecified) currency.
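The standard amortization formula referenced here is M = P*r / (1 - (1+r)^(-n)), with monthly rate r and n monthly payments; a hedged sketch (the function name loan is hypothetical):

```javascript
// Fixed-rate amortizing loan: monthly payment, total paid, total interest.
function loan(principal, annualRatePercent, termYears) {
  const n = termYears * 12;               // number of monthly payments
  const r = annualRatePercent / 100 / 12; // monthly interest rate
  const monthly = r === 0
    ? principal / n                       // zero-interest edge case
    : (principal * r) / (1 - Math.pow(1 + r, -n));
  const totalPaid = monthly * n;
  return { monthly, totalPaid, totalInterest: totalPaid - principal };
}
```

For example, 100,000 at 6% over 30 years gives a monthly payment of roughly 599.55, the textbook amortization result.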

Parameters (JSON Schema)
principal (required): Loan amount (principal).
term_years (required): Loan term in years.
annual_rate_percent (required): Annual interest rate as a percentage (e.g. 6.5 for 6.5%).

Output Schema (JSON Schema)
months (required): Term in months.
principal (required): Principal, echoed back.
term_years (required): Term in years, echoed back.
total_paid (required): Total paid (principal + interest).
total_interest (required): Total interest paid over the life of the loan.
monthly_payment (required): Monthly payment amount.
annual_rate_percent (required): Annual rate, echoed back.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, covering safety. The description adds that it uses the standard amortization formula and notes that the currency is unspecified, which provides minor extra context but no deeper behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core action, no wasted words. Very concise and efficiently structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of loan calculations and the presence of an output schema, the description is mostly complete. It mentions the formula and currency. Minor missing details like compounding frequency or rounding, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all three parameters. The description does not add meaningful new information beyond what the schema already provides, so score is at baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool computes: monthly payment, total interest, and total cost of a fixed-rate amortizing loan. It uses specific verbs and resource, distinguishing it from sibling calculator tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates usage for fixed-rate amortizing loans, but lacks explicit guidance on when not to use or alternatives. However, given the sibling tools are unrelated, this is adequate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

palindrome-checker (A)
Read-only, Idempotent

Test whether a word or phrase is a palindrome. Compares the input lowercased and stripped of non-alphanumeric characters against its reverse.
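The normalize-and-compare behavior described can be sketched as follows (isPalindrome is an illustrative name, not the server's code):

```javascript
// Lowercase, strip non-alphanumerics, then compare against the reverse.
function isPalindrome(text) {
  const cleaned = text.toLowerCase().replace(/[^a-z0-9]/g, "");
  const reversed = [...cleaned].reverse().join("");
  return { cleaned, reversed, is_palindrome: cleaned === reversed };
}
```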

Parameters (JSON Schema)
text (required): Word or phrase to test.

Output Schema (JSON Schema)
length (required): Length of the cleaned input.
cleaned (required): Input lowercased and stripped of non-alphanumeric characters.
reversed (required): The cleaned input reversed.
is_palindrome (required): True if the input is a palindrome.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the agent knows it's a safe, idempotent read. The description adds behavioral details beyond annotations: lowercasing and stripping non-alphanumeric characters, which is not in the input schema. This gives the agent a precise understanding of the transformation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose ('Test whether a word or phrase is a palindrome') and then providing the algorithm. No unnecessary words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (one parameter, straightforward logic) and presence of an output schema, the description sufficiently covers the tool's behavior. It explains the normalization and comparison logic, which is all an agent needs to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'text' described as 'Word or phrase to test.' The description adds meaning by explaining the preprocessing steps (lowercasing, stripping non-alphanumeric) and that the input is compared to its reverse, which is not in the schema description. This helps the agent understand the exact semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool tests whether a word or phrase is a palindrome, specifying the verb 'test' and the resource. It also explains the algorithm (lowercase, strip non-alphanumeric, compare to reverse), which distinguishes it from sibling tools like 'reverse-text' or 'case-converter'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what the tool does but does not provide explicit guidance on when to use it versus alternatives. Since sibling tools are all different (e.g., angle-converter, word-counter), confusion is minimal, but no when-to-use or when-not-to-use context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

password-generator (A)
Read-only, Idempotent

Generate cryptographically random passwords. Caller controls length and character classes (uppercase / lowercase / numbers / symbols). Uses crypto.getRandomValues — strong enough for real passwords.

Parameters (JSON Schema)
count (optional): How many passwords to generate. Defaults to 1. Max 50.
length (optional): Password length in characters. Defaults to 16. Min 4, max 128.
numbers (optional): Include 0-9. Defaults true.
symbols (optional): Include !@#$%^&*-_=+. Defaults true.
lowercase (optional): Include a-z. Defaults true.
uppercase (optional): Include A-Z. Defaults true.

Output Schema (JSON Schema)
count (required): How many passwords were generated.
length (required): Password length used.
passwords (required): Array of generated passwords.
pool_size (required): Size of the character pool the passwords were drawn from.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly, idempotent, non-destructive behavior; description adds the detail that it uses crypto.getRandomValues, confirming strong cryptographic randomness for passwords.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with immediate purpose statement and no extraneous information, perfectly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, full schema coverage, an existing output schema, and annotations covering safety, the description adds the cryptographic guarantee and is fully complete for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the description merely groups parameters (uppercase/lowercase/numbers/symbols) without adding deeper semantic meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Generate cryptographically random passwords' with specification of caller control over character classes, differentiating it from sibling tools like uuid-generator and random-number.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for generating passwords with control over character classes and cryptographic strength, but does not explicitly mention when not to use or compare to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

percentage-calculator (A)
Read-only, Idempotent

Calculate percentages three ways: what's X% of Y, what % is X of Y, and what's the % change from X to Y. Use mode='of' for the first form, mode='ratio' for the second, mode='change' for the third.
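The three modes correspond to three simple formulas; a minimal sketch (percentage is a hypothetical name):

```javascript
// mode 'of':     X% of Y          -> (a / 100) * b
// mode 'ratio':  X is what % of Y -> (a / b) * 100
// mode 'change': % change X -> Y  -> ((b - a) / a) * 100
function percentage(a, b, mode, precision = 2) {
  const result =
    mode === "of"    ? (a / 100) * b :
    mode === "ratio" ? (a / b) * 100 :
    /* 'change' */     ((b - a) / a) * 100;
  return Number(result.toFixed(precision));
}
```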

Parameters (JSON Schema)
a (required): First number. Meaning depends on mode (see description).
b (required): Second number. Meaning depends on mode.
mode (required): Which percentage operation. 'of' = X% of Y, 'ratio' = X is what % of Y, 'change' = % change from X to Y.
precision (optional): Decimal places in the result. Defaults to 2.

Output Schema (JSON Schema)
result (required): Calculated result.
formula (required): Formula used to compute the result.
explanation (required): Human-readable description of what was computed.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, read-only behavior. The description adds context on the three calculation modes but does not introduce behavioral surprises.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load the main purpose and then detail the modes. No extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, and description, schema, and annotations together fully cover purpose, parameters, and behavior. Output schema exists so return format is documented elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. The description adds value by explaining the meaning of parameters relative to each mode, going beyond schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states the tool calculates percentages in three distinct modes: 'of', 'ratio', and 'change'. It clearly distinguishes from sibling conversion/calculation tools on the same server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use each mode with examples like 'Use mode='of' for the first form'. It eliminates ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pressure-converter (A)
Read-only, Idempotent

Convert between pressure units: pascal, kilopascal, bar, psi, atmosphere, mmhg.

Parameters (JSON Schema)
to (required): Target unit.
from (required): Source unit.
value (required): Pressure value to convert.

Output Schema (JSON Schema)
to (required): Target unit, echoed back.
from (required): Source unit, echoed back.
value (required): Input value, echoed back.
result (required): Converted value in the target unit.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states 'Convert' which aligns with the non-destructive nature indicated by annotations. It adds no behavioral detail beyond what annotations already provide (readOnlyHint, destructiveHint, idempotentHint). There is no contradiction, but no added value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, clear sentence front-loads the verb and resource, including a comprehensive unit list. Every word serves a purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a straightforward conversion tool with complete schema coverage and annotations, the description sufficiently names the operations and supported units. Output schema handles return format, and no additional context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description lists the units already enumerated in the schema, adding no additional semantic context beyond the schema's own descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts between pressure units, listing all supported units explicitly. It distinctly separates pressure-converter from sibling tools like angle-converter or temperature-converter, which serve different domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit usage guidance such as when to use or avoid. However, the tool's purpose is self-explanatory and siblings are distinct converters, so no direct competition. The annotations (readOnlyHint, idempotentHint) hint at safe use, but the description could include a brief note on appropriate scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

prime-checker (A)
Read-only, Idempotent

Test whether an integer ≥ 2 is prime. Returns the verdict plus the prime factorization (useful for composite numbers).
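Trial division answers both questions at once, since n is prime exactly when its factorization has a single factor; a sketch (primeCheck is illustrative, not the server's code):

```javascript
// Trial division: factor n completely; n is prime iff it has one factor.
function primeCheck(n) {
  const factors = [];
  let m = n;
  for (let d = 2; d * d <= m; d++) {
    while (m % d === 0) { factors.push(d); m /= d; }
  }
  if (m > 1) factors.push(m);  // remaining cofactor is prime
  return { is_prime: factors.length === 1, prime_factors: factors };
}
```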

Parameters (JSON Schema)
n (required): The integer to test. Must be ≥ 2.

Output Schema (JSON Schema)
n (required): Input number, echoed back.
is_prime (required): True if n is prime.
factor_count (required): Number of prime factors (with multiplicity).
prime_factors (required): Prime factorization of n.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds that it returns both a primality verdict and prime factorization, providing extra behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the purpose and concisely adds the return value details. Every word contributes meaning; no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple, deterministic tool with a single parameter and an existing output schema, the description fully covers the necessary context: input constraint, behavior, and output content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides a comprehensive description of parameter n (type, constraints, and purpose) with 100% coverage. The tool description does not add any further semantic meaning to the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('test'), the resource ('integer ≥ 2'), and the purpose ('is prime'). It also specifies the output (verdict and prime factorization), distinguishing it from sibling tools which are unrelated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly defines the usage context by specifying the input constraint (integer ≥ 2) and the nature of the output. However, it does not explicitly state when to use or avoid using the tool, though no alternative prime-checking tools exist among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

random-number (A)
Read-only, Idempotent

Generate one or more random numbers in [min, max]. By default returns integers; pass integer=false for floats. Uses Math.random() (not crypto-strong).
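A minimal sketch of the described behavior (randomNumbers is a hypothetical name; for floats the upper bound is effectively exclusive because Math.random() < 1):

```javascript
// Inclusive [min, max] for integers; Math.random() is NOT crypto-strong.
function randomNumbers(min, max, count = 1, integer = true) {
  return Array.from({ length: count }, () => {
    const x = min + Math.random() * (max - min + (integer ? 1 : 0));
    return integer ? Math.floor(x) : x;
  });
}
```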

Parameters (JSON Schema)
max (required): Inclusive upper bound.
min (required): Inclusive lower bound.
count (optional): How many numbers to generate. Defaults to 1. Max 1000 per call.
integer (optional): If true, returns integers only. Defaults to true.

Output Schema (JSON Schema)
max (required): Upper bound used (inclusive).
min (required): Lower bound used (inclusive).
count (required): How many numbers were generated.
integer (required): Whether integers were requested.
numbers (required): Array of generated numbers.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds important behavioral info beyond annotations: mentions non-crypto-strong nature and default integer behavior. Annotations already declare readOnlyHint and idempotentHint, so description complements well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences covering purpose, defaults, and limitation. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Output schema exists, so return values not needed. Description covers input bounds, type, count limit, and cryptographic weakness – complete for this simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% and includes defaults for count and integer. Description adds minimal extra: clarifies [min, max] inclusive and reinforces max count per call. Baseline 3 appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'generate' and resource 'random numbers' with bounds [min, max] and integer/float option. Unambiguous and distinct from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies use for non-cryptographic randomness via 'Math.random() (not crypto-strong)', but does not explicitly state when to use vs alternatives like password-generator or uuid-generator.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

regex-tester (A)
Read-only, Idempotent

Test a JavaScript regex against text. Returns all matches with their positions, captured groups, and named groups. Defaults to global flag so all occurrences are found.
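The described behavior maps closely onto String.prototype.matchAll, which requires the global flag; a sketch (testRegex is illustrative, and the 100-match truncation is omitted):

```javascript
// Collect all matches with positions, captured groups, and named groups.
function testRegex(pattern, text, flags = "g") {
  const re = new RegExp(pattern, flags.includes("g") ? flags : flags + "g");
  return [...text.matchAll(re)].map((m) => ({
    match: m[0],
    index: m.index,
    groups: m.slice(1),     // positional captures
    named: m.groups ?? {},  // named captures, if any
  }));
}
```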

Parameters (JSON Schema)
text (required): The text to test against.
flags (optional): Regex flags. Allowed: g, i, m, s, u, y. Defaults to 'g'.
pattern (required): The regex pattern, without delimiters. Standard JavaScript regex syntax.

Output Schema (JSON Schema)
flags (required): Flags used.
matches (required): First 100 matches with positions and captured groups.
pattern (required): Regex pattern, echoed back.
truncated (required): True if more than 100 matches were found and the list was truncated.
match_count (required): Total number of matches found.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint (true) and destructiveHint (false). The description adds context about default global behavior and output details (positions, groups) beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no unnecessary words, front-loaded with purpose. Every sentence provides useful information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (inferred), the description adequately covers the main behavior: output includes matches, positions, groups. It does not cover error handling, but that is acceptable for a utility tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all 3 parameters. The description adds value by clarifying the default 'g' flag behavior and the nature of the operation. This exceeds the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Test a JavaScript regex against text' and specifies the output includes matches, positions, captured groups, and named groups. This is a specific verb-resource combination that distinguishes it from sibling tools like converters and calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that default global flag finds all occurrences, which guides usage for exhaustive matching. It does not explicitly exclude alternatives, but given unrelated siblings, this is sufficient. Could be improved by mentioning when to use flags parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rem-to-px (A)
Read-only, Idempotent

Convert REM (relative to root font size) to pixels. Defaults to a 16px base (browser default); pass base_px to use a different root size.
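The conversion itself is a single multiplication; a sketch (remToPx is a hypothetical name):

```javascript
// px = rem * base; browsers default the root font size to 16px.
function remToPx(rem, basePx = 16) {
  return { rem, base_px: basePx, pixels: rem * basePx };
}
```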

Parameters (JSON Schema)
rem (required): The REM value to convert.
base_px (optional): Base font size in pixels. Defaults to 16 (browser default).

Output Schema (JSON Schema)
rem (required): Input REM value, echoed back.
pixels (required): Equivalent pixel value.
base_px (required): Base font size used.
em_equivalent (required): Equivalent em value (same as rem in this context).

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds practical context (default 16px base, optional override) without contradicting structured data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load the purpose and key detail (default base). No superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool with a full input schema and an output schema (not shown but present), the description covers all needed context: purpose, default, and optional parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters have descriptions). The description reiterates the default base but adds no new meaning beyond what the schema already provides. Baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and the resource 'REM to pixels', specifying the default base and the optional base_px parameter. It distinguishes itself from sibling converters (e.g., length-converter) by focusing on REM units.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates when to use (when needing REM->PX conversion) but does not explicitly contrast with alternatives. However, given the narrow focus, the context is clear enough for an agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reverse-textA
Read-onlyIdempotent
Inspect

Reverse a string character-by-character. Unicode-aware — handles emoji and combining characters correctly using Array.from on the iterator.

ParametersJSON Schema
NameRequiredDescriptionDefault
textYesText to reverse.

Output Schema

ParametersJSON Schema
NameRequiredDescription
outputYesReversed text, Unicode-safe.
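The Array.from approach named in the description can be sketched as below (reverseText is an illustrative name). Iterating by code points keeps surrogate pairs such as emoji intact; note, though, that combining marks are separate code points and would reorder — a fully grapheme-safe reversal would need Intl.Segmenter:

```javascript
// Reverse by Unicode code points (not UTF-16 code units), so emoji
// and other astral-plane characters are not split in half.
function reverseText(text) {
  return { output: Array.from(text).reverse().join('') };
}
```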
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds details about Unicode handling and internal implementation (Array.from), providing extra transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core action and key feature (Unicode-aware), no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple text reversal tool, the description covers behavior, the schema fully documents the parameter, and an output schema exists. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with a description for the parameter. The tool description further explains the reversal is character-by-character and Unicode-aware, adding meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it reverses a string character-by-character and is Unicode-aware, which is specific and distinguishes it from sibling tools that are converters or calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context for when to use it (reverse text), and sibling tools are unrelated, but no explicit exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

salary-to-hourlyA
Read-onlyIdempotent
Inspect

Convert an annual salary to hourly, daily, weekly, and monthly equivalents. Defaults to a standard 40-hour, 52-week schedule.

ParametersJSON Schema
NameRequiredDescriptionDefault
annual_salaryYesAnnual gross salary in the same (unspecified) currency as the output.
hours_per_weekNoWorking hours per week. Defaults to 40.
weeks_per_yearNoWorking weeks per year. Defaults to 52.

Output Schema

ParametersJSON Schema
NameRequiredDescription
annualYesAnnual rate, echoed back.
hourlyYesHourly rate.
weeklyYesWeekly rate.
monthlyYesMonthly rate.
daily_8hYesDaily rate at 8 hours/day.
hours_per_weekYesHours per week used.
weeks_per_yearYesWeeks per year used.
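The arithmetic behind the schema above is straightforward; a sketch using the tool's defaults (salaryToHourly is an illustrative name):

```javascript
// hourly = annual ÷ (hours/week × weeks/year); other rates derive from it.
function salaryToHourly(annual_salary, hours_per_week = 40, weeks_per_year = 52) {
  const hourly = annual_salary / (hours_per_week * weeks_per_year);
  return {
    annual: annual_salary,
    hourly,
    weekly: hourly * hours_per_week,
    monthly: annual_salary / 12,
    daily_8h: hourly * 8, // daily rate at 8 hours/day
    hours_per_week,
    weeks_per_year,
  };
}
```

For example, a 104,000 salary at the defaults works out to 50 per hour (104,000 ÷ 2,080 working hours).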
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive. The description adds default schedule (40-hour, 52-week) and output types (hourly, daily, weekly, monthly), which provides useful context beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences directly state purpose and defaults. No unnecessary words; optimal length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With output schema present, description doesn't need to detail return values. It covers conversion types and defaults. Param descriptions handle other details. Complete for this simple conversion tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear descriptions. The description adds context about the output (daily and monthly equivalents) that isn't in param descriptions, raising it above baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it converts an annual salary to hourly, daily, weekly, and monthly equivalents, with clear verb 'Convert'. It distinguishes from sibling converters by specifying salary conversion, which no other sibling does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit instructions on when to use this tool versus alternatives. The description implies its use for salary conversion but doesn't provide when-not or contrast with similar tools like tip-calculator. Given its clear domain, it scores at the middle.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scientific-notationA
Read-onlyIdempotent
Inspect

Convert a number to scientific and engineering notation. Returns the coefficient, exponent, and several human-readable forms.

ParametersJSON Schema
NameRequiredDescriptionDefault
nYesThe number to convert. Accepts standard or already-in-scientific-notation values.
precisionNoSignificant digits in the coefficient. Defaults to 4.

Output Schema

ParametersJSON Schema
NameRequiredDescription
exponentYesThe exponent part.
standardYesStandard decimal form.
scientificYesNumber in scientific notation (e.g. '4.5 × 10^-5').
coefficientYesThe coefficient part.
engineeringYesEngineering notation form.
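One plausible way to produce the fields above — a coefficient and exponent, plus an engineering form whose exponent is a multiple of 3. The exact rounding and formatting of the hosted tool may differ:

```javascript
// Split n into coefficient × 10^exponent; engineering notation snaps
// the exponent down to the nearest multiple of 3.
function scientificNotation(n, precision = 4) {
  if (n === 0) {
    return { coefficient: 0, exponent: 0, standard: '0',
             scientific: '0 × 10^0', engineering: '0 × 10^0' };
  }
  const exponent = Math.floor(Math.log10(Math.abs(n)));
  const coefficient = Number((n / 10 ** exponent).toPrecision(precision));
  const engExp = Math.floor(exponent / 3) * 3;
  const engCoef = Number((n / 10 ** engExp).toPrecision(precision));
  return {
    coefficient,
    exponent,
    standard: String(n),
    scientific: `${coefficient} × 10^${exponent}`,
    engineering: `${engCoef} × 10^${engExp}`,
  };
}
```

So 0.000045 becomes 4.5 × 10^-5 in scientific form and 45 × 10^-6 in engineering form.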
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and idempotentHint=true, signaling safety. The description adds return types but no additional behavioral details such as error handling or edge cases. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that front-load the primary action and the returned outputs. Every word is necessary and no filler is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity and full schema coverage with annotations, the description adequately explains the return values. It lacks explanation of edge cases (e.g., zero) but is otherwise complete with the presence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, both parameters have descriptions. The tool description adds no further meaning beyond what the schema provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts numbers to scientific and engineering notation, specifying the returned outputs (coefficient, exponent, human-readable forms). This verb+resource description distinguishes it from sibling tools like angle-converter or fraction-simplifier.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for numeric notation conversion but does not explicitly state when to use it versus alternatives, nor does it provide exclusions or preconditions. The context of sibling converter/calculator tools makes the purpose clear, but no direct guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sha256-generatorA
Read-onlyIdempotent
Inspect

Compute the SHA-256 hash of a text string. Returns the digest as a 64-character lowercase hex string. UTF-8 encoded.

ParametersJSON Schema
NameRequiredDescriptionDefault
textYesThe text to hash. UTF-8 encoded before hashing.

Output Schema

ParametersJSON Schema
NameRequiredDescription
hexYes64-character lowercase hex digest.
algorithmYesHash algorithm name (always 'SHA-256').
bytes_hashedYesNumber of bytes that were hashed.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive. Description adds UTF-8 encoding and hex output format, enhancing transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, no wasted words. Core information front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-param tool with clear annotations, the description fully covers purpose, input encoding, and output format; the output schema documents the remaining return fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameter with description. Tool description adds encoding detail but does not add significant new semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action (compute), resource (SHA-256 hash of text), and output (64-character hex string). It uniquely identifies the tool among siblings like angle-converter or word-counter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies use when a SHA-256 hash is needed. No explicit exclusion, but siblings are diverse so context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sort-linesA
Read-onlyIdempotent
Inspect

Sort the lines of a text block. Options: ascending or descending, optional dedupe, optional case-insensitive compare.

ParametersJSON Schema
NameRequiredDescriptionDefault
textYesMulti-line text. Each line is a separate sort key.
uniqueNoIf true, dedupe lines as part of the sort. Default false.
directionNoSort direction. Default asc.
case_insensitiveNoIf true, compare case-insensitively. Default false.

Output Schema

ParametersJSON Schema
NameRequiredDescription
outputYesSorted text with newline-joined lines.
line_countYesNumber of lines in the output.
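The option interplay — dedupe before sorting, direction applied last, case folded only for comparison — can be sketched as below (sortLines and the exact dedupe ordering are illustrative assumptions):

```javascript
// Sort lines with optional dedupe, direction, and case-insensitive compare.
function sortLines(text, { unique = false, direction = 'asc', case_insensitive = false } = {}) {
  let lines = text.split('\n');
  if (unique) lines = [...new Set(lines)]; // exact-match dedupe
  const key = (s) => (case_insensitive ? s.toLowerCase() : s);
  lines.sort((a, b) => (key(a) < key(b) ? -1 : key(a) > key(b) ? 1 : 0));
  if (direction === 'desc') lines.reverse();
  return { output: lines.join('\n'), line_count: lines.length };
}
```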
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. Description adds option details but no additional behavioral traits beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences front-load the verb and noun, then list the options. No waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple sorting tool with full schema coverage and an output schema, the description covers all necessary aspects: action, options, and constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. Description lists options (ascending/descending, dedupe, case-insensitive) which matches schema properties, but adds little meaning beyond the schema's own descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Sort the lines of a text block' with specific options (ascending/descending, dedupe, case-insensitive). Distinct from sibling 'dedup-lines' which only deduplicates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use over siblings like 'dedup-lines'. The purpose is clear, but the description lacks explicit when-to-use or when-not-to-use advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

speed-converterA
Read-onlyIdempotent
Inspect

Convert between speed units: km/h, m/s, mph, knots, ft/s.

ParametersJSON Schema
NameRequiredDescriptionDefault
toYesTarget unit.
fromYesSource unit: km/h, m/s, mph, knots, ft/s.
valueYesSpeed value to convert.

Output Schema

ParametersJSON Schema
NameRequiredDescription
toYesTarget unit, echoed back.
fromYesSource unit, echoed back.
valueYesInput value, echoed back.
resultYesConverted value in the target unit.
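Converters like this are typically implemented by pivoting through one canonical unit; a sketch using metres per second (the factor table and function name are illustrative, not the server's code):

```javascript
// Metres per second per unit. km/h, mph, and ft/s factors are exact by
// definition; the international knot is 1852 m per hour.
const MPS_PER = {
  'm/s': 1,
  'km/h': 1000 / 3600,
  'mph': 1609.344 / 3600,
  'knots': 1852 / 3600,
  'ft/s': 0.3048,
};

function convertSpeed(value, from, to) {
  return { value, from, to, result: (value * MPS_PER[from]) / MPS_PER[to] };
}
```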
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds no behavioral traits beyond the conversion operation, but the annotations carry the burden of transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the action and resource, with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With full schema coverage, annotations, and an existing output schema, the description provides all necessary context: the conversion action and the complete list of units. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for all three parameters. The description lists the units but does not add meaningful information beyond what is in the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Convert') and the resource ('speed units'), listing all supported units (km/h, m/s, mph, knots, ft/s), making it distinct from sibling conversion tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool or when not to, nor does it mention alternatives. Usage is implied by the domain (speed) and sibling context, but no direct guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

square-root-calculatorA
Read-onlyIdempotent
Inspect

Compute square root, cube root, square, and cube of a number. Returns all four so callers don't have to call four tools.

ParametersJSON Schema
NameRequiredDescriptionDefault
nYesThe input number. Must be non-negative for the square root branch.

Output Schema

ParametersJSON Schema
NameRequiredDescription
cubeYes
squareYes
cube_rootYes∛n
square_rootYes√n
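The four values come from two built-ins and two powers; a sketch (rootsAndPowers is an illustrative name):

```javascript
// All four results in one call; Math.sqrt returns NaN for negative n,
// while Math.cbrt handles negatives fine.
function rootsAndPowers(n) {
  return {
    square_root: Math.sqrt(n),
    cube_root: Math.cbrt(n),
    square: n ** 2,
    cube: n ** 3,
  };
}
```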
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, covering safety. The description adds that the tool returns all four computed values, which is behavioral context beyond annotations. However, it doesn't discuss edge cases like negative inputs, though the schema addresses that.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences: first stating the action, second explaining the benefit. No wasted words, front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists (so return values are documented), the description is complete. It covers purpose, benefit, and the tool's composite nature. Annotations and schema handle the rest.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with the schema already describing parameter 'n' (non-negative for square root). The description does not add further semantics; baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes square root, cube root, square, and cube of a number. This specific verb+resource combination distinguishes it from sibling tools, especially by noting it returns all four operations in one call, avoiding multiple tool invocations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the tool is for when you need any of these four operations, with the bonus of avoiding multiple calls. While it doesn't explicitly state when not to use, the context of sibling tools (none provide these individual operations) makes the usage clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

temperature-converterA
Read-onlyIdempotent
Inspect

Convert between Celsius, Fahrenheit, and Kelvin. Pass value + from + to.

ParametersJSON Schema
NameRequiredDescriptionDefault
toYesTarget unit.
fromYesSource unit.
valueYesThe temperature value to convert.

Output Schema

ParametersJSON Schema
NameRequiredDescription
toYesTarget unit, echoed back.
fromYesSource unit, echoed back.
valueYesInput value, echoed back.
resultYesConverted value in the target unit.
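A pivot-through-Celsius sketch of the three-way conversion (single-letter unit labels are used here for brevity; the tool's accepted spellings may differ):

```javascript
// Normalize to Celsius, then convert out to the target unit.
const toC   = { C: (v) => v, F: (v) => ((v - 32) * 5) / 9, K: (v) => v - 273.15 };
const fromC = { C: (c) => c, F: (c) => (c * 9) / 5 + 32,   K: (c) => c + 273.15 };

function convertTemperature(value, from, to) {
  return { value, from, to, result: fromC[to](toC[from](value)) };
}
```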
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint and idempotentHint, so the description adds minimal behavioral info. It doesn't detail output format or edge cases, but annotations cover safety.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two brief, clear sentences with no wasted words. Information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, full annotations, and output schema, the description is adequate. It could mention precision or return format, but not essential.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents parameters. The description does not add any extra meaning beyond 'Convert between...', thus baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the function: converting between Celsius, Fahrenheit, and Kelvin. This clearly distinguishes it from sibling tools like angle-converter or length-converter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description instructs to 'Pass value + from + to', which implies basic usage. While it doesn't explicitly state when not to use, the tool's name and context among other converters make alternatives obvious.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tip-calculatorA
Read-onlyIdempotent
Inspect

Compute the tip, total, and per-person split for a bill. All amounts use the same (unspecified) currency — the engine doesn't care about currency codes.

ParametersJSON Schema
NameRequiredDescriptionDefault
billYesPre-tip bill amount, in the same currency as the output.
splitNoNumber of people splitting the bill. Defaults to 1.
tip_percentYesTip percentage (e.g. 18 for 18%).

Output Schema

ParametersJSON Schema
NameRequiredDescription
tipYesTip amount.
billYesPre-tip bill amount.
splitYesNumber of people the bill is split between.
totalYesTotal bill including tip.
per_person_tipYesEach person's share of the tip.
per_person_totalYesEach person's share of the total.
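The currency-agnostic arithmetic maps directly onto the output schema; a sketch (tipCalculator is an illustrative name):

```javascript
// Amounts are plain numbers — the math never touches currency codes.
function tipCalculator(bill, tip_percent, split = 1) {
  const tip = bill * (tip_percent / 100);
  const total = bill + tip;
  return {
    bill,
    tip,
    total,
    split,
    per_person_tip: tip / split,
    per_person_total: total / split,
  };
}
```

A 200 bill with a 50% tip split two ways gives a 300 total and 150 per person.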
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and idempotentHint=true. Description adds value by noting the engine is currency-agnostic ('doesn't care about currency codes'), which is a key behavioral trait. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with purpose, then a clarifying note. No wasted words; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator with an output schema, the description covers the core functionality and critical behavior (currency agnostic). No gaps given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description does not add significant meaning beyond schema; it reinforces that all amounts use same currency, but that is already implied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it computes tip, total, and per-person split for a bill. Specific verb+resource distinguishes it from sibling tools like 'percentage-calculator' and 'discount-calculator'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs. alternatives such as 'percentage-calculator' or 'loan-calculator'. The description only states what it computes, not in what scenarios it is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

url-encoderA
Read-onlyIdempotent
Inspect

URL-encode or URL-decode a string. Uses RFC 3986 component encoding (encodeURIComponent semantics).

ParametersJSON Schema
NameRequiredDescriptionDefault
modeYesencode: text → URL-encoded. decode: URL-encoded → text.
inputYesInput string.

Output Schema

ParametersJSON Schema
NameRequiredDescription
modeYesMode used, echoed back.
outputYesThe encoded or decoded result.
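The encodeURIComponent semantics called out in the description can be sketched as below (urlCodec is an illustrative name; note that encodeURIComponent leaves !'()* unescaped, a small deviation from strict RFC 3986 percent-encoding):

```javascript
// mode 'encode': text → percent-encoded component;
// mode 'decode': percent-encoded component → text.
function urlCodec(input, mode) {
  const output = mode === 'encode'
    ? encodeURIComponent(input)
    : decodeURIComponent(input);
  return { mode, output };
}
```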
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds the significant behavioral detail that encoding follows 'RFC 3986 component encoding (encodeURIComponent semantics)', which clarifies exactly how the encoding works. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that convey all essential information: the tool's purpose and the encoding standard. It is front-loaded with the key action and resource, leaving no room for redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (encode/decode with string input and output schema), the description fully covers the necessary context. It states both modes and the encoding standard, which is sufficient for correct usage. The presence of an output schema means return values are already documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides clear descriptions for both parameters (mode with enum, input with maxLength). The description adds the encoding standard context, further clarifying the behavior of the mode parameter. With 100% schema coverage, the description complements well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'URL-encode or URL-decode a string' with the specific verb and resource. It also specifies 'RFC 3986 component encoding (encodeURIComponent semantics)', which distinguishes it from any other URL-related tools like url-parser. The tool name matches the purpose exactly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for encoding/decoding strings for URLs, and the sibling tools context provides alternatives (e.g., base64, jwt-decoder). However, it does not explicitly state when not to use this tool or mention specific alternatives, so it lacks complete exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

url-parser (Grade: A)
Read-only · Idempotent

Parse a URL into its components: protocol, host, port, path, query parameters, hash. Returns the query string parsed into a key-value map.

Parameters (JSON Schema)

Name | Required | Description | Default
url | Yes | The URL to parse. Must include the protocol (http:// or https://). | —

Output Schema

Name | Required | Description
hash | Yes | Fragment identifier including leading '#', or empty.
host | Yes | Hostname plus port if present.
path | Yes | Pathname including leading slash.
port | Yes | Port number as a string, or null.
query | Yes | Raw query string including leading '?', or empty.
origin | Yes | Origin (protocol + host).
params | Yes | Parsed query parameters.
hostname | Yes | Hostname only.
protocol | Yes | URL protocol without trailing colon (e.g. 'https').
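The output shape above closely mirrors the WHATWG URL API, so the tool's behavior can be sketched against it. This is a plausible reading, not the actual implementation; the function name is illustrative:

```javascript
// Sketch of URL parsing via the WHATWG URL API (available in Node and browsers).
function parseUrl(url) {
  const u = new URL(url); // throws TypeError on malformed input
  return {
    protocol: u.protocol.replace(/:$/, ""), // "https:" → "https"
    hostname: u.hostname,                   // hostname only
    port: u.port || null,                   // "" becomes null
    host: u.host,                           // hostname plus port if present
    path: u.pathname,                       // includes leading "/"
    query: u.search,                        // includes leading "?", or ""
    hash: u.hash,                           // includes leading "#", or ""
    origin: u.origin,
    params: Object.fromEntries(u.searchParams),
  };
}
```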
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, providing a safe read operation profile. The description adds that it returns components, but does not disclose error handling for malformed URLs or partial inputs. Without additional behavioral context, the description meets minimum requirements but lacks depth for edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. Directly states purpose and output. Front-loaded with action verb and resource. Each sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no nested objects, clear annotations), the description is adequate. It explains what the tool does and the output format. Minor omission: it does not describe the return structure beyond the query map, though the output schema covers this. Score 4 for being nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the single 'url' parameter is described as 'The URL to parse. Must include the protocol (http:// or https://).' The tool description reinforces that parsing produces components, but does not add new semantics beyond the schema. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: parsing a URL into components (protocol, host, port, path, query parameters, hash) and returning query string as key-value map. This is a specific verb+resource that distinguishes it from sibling utilities like url-encoder or other converters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives, nor does it provide exclusions or prerequisites. However, the purpose is straightforward, and the intended use is implied by the description of parsing URL components.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uuid-generator (Grade: A)
Read-only · Idempotent

Generate one or more RFC 4122 v4 UUIDs (random). Returns an array of strings, even for a single UUID, so callers don't have to special-case length.

Parameters (JSON Schema)

Name | Required | Description | Default
count | No | How many UUIDs to generate. Max 100 per call. | 1

Output Schema

Name | Required | Description
count | Yes | How many UUIDs were generated.
uuids | Yes | Array of generated UUIDs.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds value by specifying the return format (array of strings) and the behavior for single UUIDs, which is beyond what annotations provide. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no redundant information. Every sentence adds value: first states purpose, second clarifies return type and behavior. Very concise and structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, strong annotations, and an output schema exists), the description is complete. It covers purpose, return format, and guidance on parameter (implied by 'one or more'), leaving no ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter 'count' is fully described with min, max, and default. The description does not add additional meaning or constraints beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates RFC 4122 v4 UUIDs (random) and returns an array of strings. It distinguishes itself from siblings like random-number or password-generator by specifying the UUID standard and return format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that it generates one or more UUIDs and that the return is always an array, which guides usage. It does not explicitly say when not to use the tool or name alternatives, but among simple utility tools the intended usage is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

volume-converter (Grade: A)
Read-only · Idempotent

Convert between volume units: liter, milliliter, cubic-meter, gallon-us, quart-us, pint-us, fluid-ounce-us, cup-us.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit. | —
from | Yes | Source unit. | —
value | Yes | Volume value to convert. | —

Output Schema

Name | Required | Description
to | Yes | Target unit, echoed back.
from | Yes | Source unit, echoed back.
value | Yes | Input value, echoed back.
result | Yes | Converted value in the target unit.
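Unit converters like this one typically pivot through a base unit. A sketch under that assumption, using the standard definitions (1 US gallon = 3.785411784 L); the server's exact precision and rounding are not documented:

```javascript
// Sketch: convert via liters as the pivot unit.
const LITERS_PER_UNIT = {
  liter: 1,
  milliliter: 0.001,
  "cubic-meter": 1000,
  "gallon-us": 3.785411784,
  "quart-us": 0.946352946,
  "pint-us": 0.473176473,
  "fluid-ounce-us": 0.0295735295625,
  "cup-us": 0.2365882365,
};

function convertVolume(value, from, to) {
  return (value * LITERS_PER_UNIT[from]) / LITERS_PER_UNIT[to];
}
```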
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds no extra behavioral context beyond the conversion purpose, so it meets the baseline but adds no additional value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that directly conveys the tool's purpose and supported units with no redundant information. It is appropriately front-loaded and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, annotations covering safety, complete schema with descriptions, and an output schema, the description provides sufficient information for an agent to correctly invoke the tool. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for each parameter. The description lists the units already present in the schema enums, adding no meaningful extra meaning beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it converts volume units and lists all supported units, making the verb and resource specific. The unit differentiation is sufficient given sibling tools are for other quantities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit when-to-use or when-not-to-use guidance. Usage is implied by the tool's purpose, but no explicit alternatives or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vowel-counter (Grade: A)
Read-only · Idempotent

Count vowels and consonants in text. ASCII-only — handles English-style letters. Returns vowel/consonant counts and the vowel ratio.

Parameters (JSON Schema)

Name | Required | Description | Default
text | Yes | Text to analyze. | —

Output Schema

Name | Required | Description
words | Yes | Word count.
vowels | Yes | Vowel count.
letters | Yes | Total letters (vowels + consonants).
consonants | Yes | Consonant count.
vowel_ratio_percent | Yes | Vowels as a percentage of letters.
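The ASCII-only constraint has a concrete consequence: only [a-z]/[A-Z] count as letters, so accented or non-Latin characters are ignored. A sketch under that assumption (word splitting on whitespace is also an assumption):

```javascript
// Sketch of ASCII-only vowel/consonant counting with a vowel ratio.
function countVowelsConsonants(text) {
  const letters = text.match(/[a-z]/gi) ?? [];
  const vowels = letters.filter((c) => "aeiou".includes(c.toLowerCase())).length;
  const consonants = letters.length - vowels;
  const words = (text.match(/\S+/g) ?? []).length;
  return {
    words,
    vowels,
    consonants,
    letters: letters.length,
    vowel_ratio_percent: letters.length ? (vowels / letters.length) * 100 : 0,
  };
}
```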
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. Description adds ASCII-only constraint and specifies exact outputs (counts and ratio). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-loading key information: what it does, constraints, and outputs. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With output schema present, the description covers purpose, input constraints, and return values. No missing elements for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter 'text' described as 'Text to analyze.' Description doesn't significantly add beyond schema, but clarifies ASCII-only handling. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it counts vowels and consonants, distinct from sibling tools like character-counter or word-counter. Specific verb 'Count' with clear resource 'vowels and consonants in text'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states ASCII-only handling of English-style letters, implying use for English text and non-use for non-ASCII input. It could be stronger by directly stating when not to use the tool, but it is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

weight-converter (Grade: A)
Read-only · Idempotent

Convert between common weight units: milligram, gram, kilogram, ounce, pound, stone, ton.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit. | —
from | Yes | Source unit. | —
value | Yes | Weight value to convert. | —

Output Schema

Name | Required | Description
to | Yes | Target unit, echoed back.
from | Yes | Source unit, echoed back.
value | Yes | Input value, echoed back.
result | Yes | Converted value in the target unit.
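As with the other converters, a pivot-unit sketch illustrates the behavior. The avoirdupois factors below are exact by definition; "ton" is assumed here to be the metric ton, since the tool does not say whether it means metric, short, or long ton:

```javascript
// Sketch: convert via grams as the pivot unit.
const GRAMS_PER_UNIT = {
  milligram: 0.001,
  gram: 1,
  kilogram: 1000,
  ounce: 28.349523125,
  pound: 453.59237,
  stone: 6350.29318,
  ton: 1_000_000, // assumption: metric ton
};

function convertWeight(value, from, to) {
  return (value * GRAMS_PER_UNIT[from]) / GRAMS_PER_UNIT[to];
}
```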
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description need not repeat those. However, the description adds no additional behavioral context (e.g., precision, rounding, or error handling) beyond the basic conversion action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the action ('Convert between common weight units') and immediately lists the supported units. Every word is necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, full schema coverage, output schema presence, and informative annotations, the description provides sufficient context for an agent to correctly invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents the three parameters (value, from, to). The description lists the allowed units, which mirrors the enum values in the schema, adding marginal semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it converts between common weight units and lists the specific units (milligram, gram, kilogram, ounce, pound, stone, ton). This distinguishes it from sibling tools like length-converter or temperature-converter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. While the name and description imply use for weight conversion, there are no explicit instructions about prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

whitespace-remover (Grade: A)
Read-only · Idempotent

Remove or collapse whitespace in text. Modes: trim (edges only), collapse (also fold internal runs to one space), all (strip every whitespace character).

Parameters (JSON Schema)

Name | Required | Description | Default
mode | No | trim: leading/trailing only. collapse: also collapse internal runs to one space. all: remove every whitespace character. | collapse
text | Yes | Text to clean. | —

Output Schema

Name | Required | Description
mode | Yes | Mode used, echoed back.
output | Yes | Cleaned text.
removed | Yes | Number of characters removed.
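The three modes map cleanly onto standard string operations. A sketch, assuming "collapse" also trims edges as the mode description implies (the function name is illustrative):

```javascript
// Sketch of the three modes; "collapse" is the default, per the schema.
function removeWhitespace(text, mode = "collapse") {
  let output;
  if (mode === "trim") output = text.trim();
  else if (mode === "all") output = text.replace(/\s+/g, "");
  else output = text.trim().replace(/\s+/g, " "); // collapse
  return { mode, output, removed: text.length - output.length };
}
```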
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and non-destructive behavior. The description adds specifics on modes and their effects, providing additional behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with a list of modes, front-loaded with purpose. No wasted words; highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple text transformation tool, the description combined with input schema fully specifies behavior. Output schema exists but is not needed to explain return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters with descriptions (100% coverage). The description adds the default value for mode, which is not in the schema, adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool removes or collapses whitespace, with specific verbs and resource. It distinguishes from siblings as a unique text utility.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains each mode and when to use it, but does not explicitly state when not to use this tool or mention sibling alternatives. Given the distinct purpose, however, its intended use is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

word-counter (Grade: A)
Read-only · Idempotent

Count words, characters, sentences, paragraphs, and reading time in a block of text. Words are Unicode-aware (handles non-Latin scripts). Reading time assumes 240 wpm.

Parameters (JSON Schema)

Name | Required | Description | Default
text | Yes | The text to analyze. Capped at 60,000 characters. Larger inputs should be split client-side. | —

Output Schema

Name | Required | Description
words | Yes | Word count.
sentences | Yes | Sentence count.
characters | Yes | Character count including spaces.
paragraphs | Yes | Paragraph count.
reading_minutes | Yes | Estimated reading time in minutes at 240 wpm.
characters_no_spaces | Yes | Character count excluding spaces.
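Unicode-aware word counting is typically done with Unicode property escapes. A sketch under that assumption; the sentence and paragraph heuristics below are guesses, since the server does not document its exact rules:

```javascript
// Sketch: Unicode-aware word segmentation via \p{L}/\p{N} runs,
// reading time at the documented 240 wpm.
function analyzeText(text) {
  const words = (text.match(/[\p{L}\p{N}]+/gu) ?? []).length;
  return {
    words,
    characters: text.length,
    characters_no_spaces: text.replace(/\s/g, "").length,
    sentences: (text.match(/[.!?]+/g) ?? []).length,          // assumption
    paragraphs: text.split(/\n{2,}/).filter((p) => p.trim()).length, // assumption
    reading_minutes: Math.max(1, Math.round(words / 240)),
  };
}
```

Because the regex uses `\p{L}`, non-Latin scripts such as Cyrillic or accented Latin count as words, matching the "Unicode-aware" claim.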
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows it's a safe, idempotent read operation. The description adds that it is Unicode-aware and reading time assumes 240 wpm, but beyond that, no additional behavioral traits are disclosed. The description does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the key counts. Every sentence adds necessary detail (Unicode-awareness and reading time assumption). No unnecessary words. It is appropriately sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, output schema exists), the description covers all essential aspects: what it counts, assumptions for reading time, and character limits. The description is complete for an agent to understand the tool's function and usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter 'text' is fully described in the input schema (maxLength, description about splitting). The description adds value by listing what metrics are computed (words, characters, sentences, paragraphs, reading time), which is not in the schema. This helps the agent understand how the input is processed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool counts words, characters, sentences, paragraphs, and reading time in a block of text. It distinguishes itself from sibling tools (mostly converters and calculators) by being a text analysis tool. The verb 'count' and the listed resources clearly define its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly indicates what the tool does, making its usage context apparent. However, it does not explicitly differentiate itself from the sibling 'character-counter' tool, which may also count characters. Although the description covers more metrics, the lack of explicit guidance on when to choose this over a similar tool slightly reduces the score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
