Glama

Server Details

The most comprehensive everyday calculator MCP server — 501 tools across 22 categories covering 8 countries' tax systems (FR, BE, CH, CA, US, UK, MA, SN). Finance, health, math, science, construction, conversions, education, sport, cooking, travel, and more. Free, no API key required. Streamable HTTP transport.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: C)

Average 3.1/5 across 446 of 446 tools scored. Lowest: 1.7/5.

Server Coherence (Grade: B)
Disambiguation: 2/5

Many tools have overlapping purposes, causing significant ambiguity. For example, 'calculate_bra_size' and 'calculate_bra_size_convert' both handle bra sizes, 'calculate_loan_payment' and 'calculate_mortgage' both compute loan payments, and 'calculate_volume' overlaps with specific shape calculators like 'calculate_sphere'. While descriptions help, the sheer number of similar tools makes it difficult for an agent to reliably choose the correct one without confusion.

Naming Consistency: 5/5

Tool names follow a highly consistent pattern throughout. Almost all tools use a 'calculate_' prefix followed by a descriptive noun phrase (e.g., 'calculate_bmi', 'calculate_french_income_tax'), with a few 'convert_' and 'list_'/'get_' tools maintaining similar clarity. This uniformity makes the tool set predictable and easy to navigate despite its size.

Tool Count: 1/5

With 446 tools, the count is extremely excessive for a calculator server, far beyond a reasonable scope. This overwhelms the tool surface, making it impractical for agents to efficiently discover or use tools. A well-scoped calculator might have 20-50 tools; this server's bloated count detracts from coherence and usability.

Completeness: 5/5

The tool set is remarkably complete, covering an extensive range of domains from finance and fitness to cooking and construction. There are no obvious gaps; for each domain, tools provide comprehensive calculations (e.g., tax calculators for multiple countries, unit converters for various metrics). The inclusion of 'calculate_anything' as a fallback further ensures coverage for edge cases.

Available Tools

446 tools
calculate_1rm_table (Grade: B)

Generate a full 1RM-to-12RM repetition table from a known lift using Epley formula. Returns: {input_weight, input_reps, estimated_1rm}. See list_bundles for related 'sport' calculators.

Parameters (JSON Schema):
reps (required): Repetitions performed at that weight
weight (required): Weight lifted in kg or lbs
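The Epley estimate behind this table is simple enough to sketch. A minimal Python version, assuming the server inverts Epley to fill the 2RM-to-12RM rows (the tool's exact rounding and table shape are not documented):

```python
def epley_1rm(weight: float, reps: int) -> float:
    """Estimate the one-rep max from a known lift (Epley: 1RM = w * (1 + reps/30))."""
    return weight * (1 + reps / 30)

def rep_table(weight: float, reps: int) -> dict[int, float]:
    """Estimated max weight for each rep count 1..12, by inverting Epley."""
    one_rm = epley_1rm(weight, reps)
    return {n: round(one_rm / (1 + n / 30), 1) for n in range(1, 13)}
```

For 100 kg lifted 5 times, `epley_1rm` gives roughly 116.7 kg, and the table row for 5 reps recovers the input weight.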

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full behavioral burden. It mentions the Epley formula but does not disclose limitations, edge cases, or what the output table includes (e.g., number of rows, units, formatting). This is insufficient for a tool that returns a table without an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 14 words, front-loaded with the verb 'Generate'. Every word contributes meaning, and there is no extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should explain what the generated table contains (e.g., rows for each rep range, whether values are numeric). It does not. Additionally, no annotations are present, leaving agents uninformed about safe usage or return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters, so the baseline is 3. The description does not add any additional meaning beyond the schema; it merely refers to 'a known lift' without elaborating on the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a 1RM-to-12RM repetition table using the Epley formula. It identifies the specific verb 'Generate' and resource 'repetition table', but does not explicitly distinguish itself from the sibling 'calculate_one_rep_max', which likely computes only a single 1RM value.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is used when you have a known lift (weight and reps), but provides no explicit guidance on when to use this versus alternative tools like 'calculate_one_rep_max'. No when-not or alternative descriptions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_add_hours (Grade: B)

Add two time durations and return the total in hours and minutes. See list_bundles for related 'temps-rh' calculators.

Parameters (JSON Schema):
hours1 (required): First duration — hours
hours2 (required): Second duration — hours
minutes1 (required): First duration — minutes
minutes2 (required): Second duration — minutes
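The arithmetic is plain enough to sketch. A minimal version, assuming minute overflow is normalized into hours (the description does not say how overflow is handled):

```python
def add_durations(hours1: int, minutes1: int, hours2: int, minutes2: int) -> tuple[int, int]:
    """Sum two durations; minutes are normalized into the 0-59 range."""
    total_minutes = (hours1 + hours2) * 60 + minutes1 + minutes2
    return divmod(total_minutes, 60)  # (hours, minutes)
```

`add_durations(2, 45, 1, 30)` yields `(4, 15)`.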

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavioral details. It does not mention how overflow (e.g., minutes sum > 59) is handled, whether rounding occurs, or if there are any side effects. The statement 'return the total in hours and minutes' implies proper conversion, but lacks explicit assurance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that immediately communicates the core function and return value. There is no extraneous information, and every word is necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple arithmetic tool with fully described parameters, the description covers the essential purpose and output. It lacks information about error conditions and edge cases, but given the tool's simplicity it is nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so each parameter already provides meaning (e.g., 'First duration — hours'). The description adds no additional parameter semantics beyond what the schema contains, which is acceptable at the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Add two time durations') and the output ('return the total in hours and minutes'), which is specific and unambiguous. It effectively distinguishes itself from the many 'calculate_*' siblings by focusing on a simple arithmetic operation on time durations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor any prerequisites or exclusions. For example, it does not mention that the tool is for adding only two durations or that it expects positive values (though schema enforces this). The absence of usage context limits the agent's ability to decide when this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_age (Grade: B)

Calculate exact age in years, months and days from a birth date. Returns: {today, age_years, age_months, age_days, total_days_lived, days_to_next_birthday}. See list_bundles for related 'temps-rh' calculators.

Parameters (JSON Schema):
birth_date (required): YYYY-MM-DD — Date of birth
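A plausible implementation of the years/months/days split uses calendar borrow arithmetic. The sketch below is an assumption about the method, not the server's actual code:

```python
import calendar
from datetime import date

def exact_age(birth_date: str, today: date) -> dict:
    """Split the elapsed time since birth into whole years, months and days."""
    born = date.fromisoformat(birth_date)
    years = today.year - born.year
    months = today.month - born.month
    days = today.day - born.day
    if days < 0:  # borrow days from the previous calendar month
        months -= 1
        prev_year, prev_month = (today.year, today.month - 1) if today.month > 1 else (today.year - 1, 12)
        days += calendar.monthrange(prev_year, prev_month)[1]
    if months < 0:  # borrow a year
        years -= 1
        months += 12
    return {"age_years": years, "age_months": months, "age_days": days,
            "total_days_lived": (today - born).days}
```

One day before a 34th birthday this yields 33 years, 11 months and 30 days, matching the everyday reading of "exact age".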

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description only restates the purpose. It does not disclose any behavioral traits such as handling of invalid dates, timezone assumptions, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 10 words, no wasted wording. Perfectly concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool, the description is adequate but could mention edge cases or output details. With no output schema, a brief note on return format would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for birth_date. The description reinforces the format but adds no new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly specifies the action ('calculate'), the resource ('exact age'), and the output format ('in years, months and days'), which helps differentiate it from similar siblings like calculate_age_in_units.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no exclusions, and no context about prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_age_in_units (Grade: C)

Calculate exact age in multiple units from birth date. Returns: {weeks, hours, minutes, seconds}. See list_bundles for related 'fun' calculators.

Parameters (JSON Schema):
birth_date (required): Birth date YYYY-MM-DD
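The conversion is a single subtraction followed by integer division. A sketch, assuming birth at midnight UTC (the tool's timezone handling is undocumented):

```python
from datetime import datetime, timezone

def age_in_units(birth_date: str, now: datetime) -> dict:
    """Whole weeks, hours, minutes and seconds elapsed since birth."""
    born = datetime.fromisoformat(birth_date).replace(tzinfo=timezone.utc)
    seconds = int((now - born).total_seconds())
    return {"weeks": seconds // (7 * 86400), "hours": seconds // 3600,
            "minutes": seconds // 60, "seconds": seconds}
```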

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It mentions 'exact age in multiple units' but does not specify which units (e.g., years, months, days), how edge cases like leap years are handled, or whether time zones are considered. The behavioral insight is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence without unnecessary words. It could be slightly more detailed, but it is efficient and front-loads the key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description should explain the return format (e.g., which units are included). It also omits error handling (e.g., invalid dates) and time zone handling. The tool's simplicity mitigates this, but important details are missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for one parameter, with a clear description of format. The tool description does not add new meaning to the parameter beyond what the schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates exact age from a birth date, with the added detail of multiple units, which distinguishes it from simpler age calculators like 'calculate_age'. However, it doesn't explicitly differentiate from siblings or mention alternative tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., 'calculate_age' for years only). There is no mention of prerequisites, context, or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_alcohol_units (Grade: A)

Compute alcohol units (UK) and grams of pure alcohol in a drink. Use for health tracking and limit awareness. Inputs: volume mL, ABV %. Returns UK units, grams of pure alcohol, and standard-drink equivalent. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema):
drinks (required)
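The underlying formulas are standard: one UK unit is 10 mL of pure ethanol, and ethanol's density is about 0.789 g/mL. A sketch for a single drink (the `drinks` parameter presumably accepts a list of such entries; its exact shape is not documented):

```python
def alcohol_units(volume_ml: float, abv_percent: float) -> dict:
    """UK units and grams of pure alcohol in one drink."""
    pure_ml = volume_ml * abv_percent / 100      # millilitres of pure ethanol
    grams = pure_ml * 0.789                      # ethanol density ~0.789 g/mL
    return {"uk_units": round(pure_ml / 10, 2),  # 1 UK unit = 10 mL pure ethanol
            "grams_alcohol": round(grams, 1)}
```

A 500 mL drink at 5% ABV comes to 2.5 UK units and about 19.7 g of pure alcohol.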

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It indicates a non-destructive calculation and comparison to a limit, but does not explicitly state read-only nature, output format, or error handling. The behavioral context is partially transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 12 words, front-loading the action and purpose. No unnecessary information, every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should hint at the return value. It mentions comparison but not whether output is text, number, or object. Lacks enough detail for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the schema itself provides clear structure (required fields, enums, constraints). The description adds no additional meaning beyond saying 'from drinks', so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates total alcohol units from drinks and compares to UK weekly limit, using a specific verb and resource. It distinguishes itself from siblings like calculate_blood_alcohol by focusing on UK units.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for computing alcohol units per UK guidelines, but does not explicitly state when to use it versus alternatives like calculate_bac_points or calculate_blood_alcohol. Usage is implied by the tool name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_altitude_sickness (Grade: A)

Assess altitude sickness risk and recommend acclimatization schedule. Returns: {risk_level, risk_color, recommended_acclimatization_days, symptoms_to_watch, recommendations}. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema):
altitude_m (required): Target altitude in meters
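The review's point about undisclosed rules can be illustrated: a risk assessment like this is typically a threshold table. The bands below are hypothetical placeholders, not the server's actual thresholds:

```python
def altitude_risk(altitude_m: float) -> dict:
    """Classify altitude-sickness risk by target altitude (illustrative bands only)."""
    bands = [(2500, "low", 0), (3500, "moderate", 1), (5500, "high", 3)]
    for limit, level, days in bands:
        if altitude_m < limit:
            return {"risk_level": level, "recommended_acclimatization_days": days}
    return {"risk_level": "very high", "recommended_acclimatization_days": 7}
```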

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates it assesses risk and recommends a schedule but doesn't disclose factors beyond altitude, limitations, or output format. Without annotations, more behavioral context would be beneficial.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the main purpose with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter, the description covers the main functionality. However, lacking an output schema or disclaimer about medical advice slightly reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'altitude_m'. The description adds no additional meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function with specific verbs: 'assess altitude sickness risk' and 'recommend acclimatization schedule', distinguishing it from other calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, such as other health-related calculators. The description implies use for altitude sickness but lacks context or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_angle_convert (Grade: A)

Convert angles between degrees, radians, gradians and turns. Returns: {original}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema):
value (required): Angle value
to_unit (required): Target unit
from_unit (required): Source unit
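Angle conversion reduces to a common intermediate unit. A sketch using turns as the pivot (a full turn is 360 degrees, 2π radians, or 400 gradians):

```python
import math

# Each unit expressed as "how many of it make one full turn".
_PER_TURN = {"degrees": 360.0, "radians": 2 * math.pi, "gradians": 400.0, "turns": 1.0}

def convert_angle(value: float, from_unit: str, to_unit: str) -> float:
    """Convert via turns; raises KeyError for an unknown unit name."""
    return value / _PER_TURN[from_unit] * _PER_TURN[to_unit]
```

For example, 100 gradians is exactly 90 degrees, and 180 degrees is π radians.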

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It simply states the conversion action but does not disclose any edge case handling (e.g., negative values, precision, rounding) or return format. For a simple conversion, a score of 3 is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words. It is concise and front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple unit conversion tool, the description captures the essence. However, it lacks mention of the output format (e.g., returns a number), but given the simplicity, it is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with clear descriptions and enum values for units. The description adds no additional meaning beyond what the schema already provides, thus baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Convert), the resource (angles), and the specific units (degrees, radians, gradians, turns). This is specific and distinguishes it from sibling tools like convert_angle or other conversion tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other angle-related tools or general converters. It only states what it does, without context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_annuity_payment (Grade: B)

Calculate periodic payment amount for a loan or annuity. Returns: {monthly_payment_eur, total_paid_eur, total_interest_eur}. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema):
rate (required): Annual interest rate percent
periods (required): Number of payment periods (months)
principal (required): Principal amount EUR
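The returned field names suggest monthly compounding of the annual rate. Under that assumption, a standard ordinary-annuity sketch (the server's compounding convention is not confirmed):

```python
def annuity_payment(principal: float, rate: float, periods: int) -> dict:
    """Level payment for an ordinary annuity; `rate` is annual percent, compounded monthly (assumed)."""
    r = rate / 100 / 12
    payment = principal / periods if r == 0 else principal * r / (1 - (1 + r) ** -periods)
    total = payment * periods
    return {"monthly_payment_eur": round(payment, 2),
            "total_paid_eur": round(total, 2),
            "total_interest_eur": round(total - principal, 2)}
```

At 12% annual over 12 months, 10 000 EUR costs about 888.49 EUR per month.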

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It fails to disclose key behavioral traits such as the type of annuity (ordinary vs. due), compounding frequency, rounding, or assumptions. This lack of detail limits the agent's understanding of the calculation's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose. It is front-loaded with no wasted words, but could benefit from slight expansion without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema or annotations, the description should compensate by explaining the return value, edge cases, or assumptions. It is too brief for a financial calculation tool, leaving the agent uncertain about output format and calculation details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with basic descriptions for each parameter (e.g., 'Annual interest rate percent'). The description adds no additional meaning beyond the schema, which is already adequate. Baseline 3 is appropriate per the rubric.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'periodic payment amount for a loan or annuity'. It is specific and distinct from the many other 'calculate_' sibling tools, such as 'calculate_loan_payment' or 'calculate_mortgage', though it does not explicitly differentiate itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_loan_payment' or 'calculate_mortgage'. The description does not provide any context for choosing this tool over similar siblings or mention any exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_anything (A)

Universal AI-powered calculator — handles any calculation not covered by specialized tools. Requires premium subscription.

Parameters (JSON Schema)
Name | Required | Description | Default
query | Yes | Calculation request in natural language (English or French) |
context | No | Optional context: units, constraints, domain |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states it is an AI-powered calculator and requires premium subscription. It does not disclose how it processes queries, what it returns, supported languages (only mentioned in schema), error handling, or any limitations beyond subscription.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: first states purpose and scope, second states a requirement. It is concise, front-loaded, and contains no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (many siblings, no output schema, no annotations), the description provides adequate scope and a key constraint but lacks details on return format, limitations, or explicit guidance to check siblings. The output schema is absent, so the agent cannot infer return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning to the parameters beyond what the schema already provides for 'query' and 'context'. No mention of format, constraints, or special behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a universal calculator for any calculation not covered by specialized tools, which distinguishes it from the many sibling tools. The verb 'handles' and resource 'any calculation' are specific, and the phrase 'not covered by specialized tools' sets clear boundaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use this tool when a calculation is not covered by specialized tools, implying the agent should check siblings first. It also mentions the premium subscription requirement. However, it does not explicitly list alternatives or scenarios to avoid.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_aquarium_volume (B)

Compute aquarium water volume in L and US gallons. Use for fishkeeping, dosing, and stocking decisions. Inputs: shape (rectangular/cylindrical/bow-front), L×W×H or radius×height in cm. Returns liters and gallons. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
width_cm | Yes | |
height_cm | Yes | |
length_cm | Yes | |
substrate_cm | No | |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool calculates volumes and capacity, implying a read-only operation. However, it does not disclose any specific behavioral traits such as input units, output format, or potential inaccuracies. Since the nature of 'calculate' is generally safe, this minimal transparency is acceptable but not exemplary.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. It front-loads the core purpose and is appropriately sized for a simple tool. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the parameter count (4), no output schema, and no annotations, the description is too sparse. It omits details on return values, calculation method, and units. The agent lacks enough context to fully understand what the tool produces, especially because 'stocking capacity' is not elaborated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
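To illustrate the missing detail, a plausible sketch based on the schema's field names (the substrate_cm interpretation, subtracting it from the usable water height, is an assumption; the server does not document its formula):

```python
US_GALLON_L = 3.785411784  # liters per US gallon

def aquarium_volume(length_cm: float, width_cm: float, height_cm: float,
                    substrate_cm: float = 0.0) -> dict:
    # Assumed: substrate reduces the usable water column (not documented).
    water_height = max(height_cm - substrate_cm, 0.0)
    liters = length_cm * width_cm * water_height / 1000.0  # 1 L = 1000 cm^3
    return {"volume_l": round(liters, 1), "gallons": round(liters / US_GALLON_L, 1)}

aquarium_volume(100, 40, 50, substrate_cm=5)  # → {'volume_l': 180.0, 'gallons': 47.6}
```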

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema coverage is 0% (no parameter descriptions in the schema). The description does not explain any parameters, leaving the agent to rely on parameter names (length_cm, width_cm, height_cm, substrate_cm). While names are somewhat self-explanatory, 'substrate_cm' might be ambiguous (thickness of substrate). The description fails to add meaning beyond the names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate' and the resource 'aquarium gross/net volume and stocking capacity'. It distinguishes this tool from sibling tools which cover many other calculation domains. However, it could be more precise about what 'stocking capacity' entails, so not a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The sibling list is extensive but all are 'calculate_*' for different subjects; the description does not provide any context or exclusions, leaving the agent to infer usage solely from the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_area (C)

Compute area for common 2D shapes (rectangle, triangle, circle, trapezoid, etc.). Use for geometry, real estate, or paint estimates. Inputs: shape + dimensions. Returns area in input-units squared. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
d1 | No | Diagonal 1 for rhombus |
d2 | No | Diagonal 2 for rhombus |
side | No | Side for hexagon |
shape | Yes | Shape type |
width | No | Width |
height | No | Height |
length | No | Length or base |
radius | No | Radius |
semi_major | No | Semi-major axis for ellipse |
semi_minor | No | Semi-minor axis for ellipse |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only says 'Calculate area' without mentioning return format, error handling, side effects, or whether the tool is read-only. This is insufficient for a tool with 10 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no wasted words. However, it could be structured to list the supported shapes for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 10 parameters supporting multiple shapes and no output schema, the description should explain that different shapes require different parameters. The current description provides no such context, making it incomplete for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
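To make the shape-to-parameter mapping concrete, here is a hypothetical dispatch mirroring the schema's field names and standard area formulas (the server's actual logic and supported shape strings are not published):

```python
import math

def calculate_area(shape: str, **dims) -> float:
    # Each shape consumes a different subset of the optional parameters.
    if shape == "rectangle":
        return dims["length"] * dims["width"]
    if shape == "triangle":
        return 0.5 * dims["length"] * dims["height"]  # 'length' doubles as base
    if shape == "circle":
        return math.pi * dims["radius"] ** 2
    if shape == "rhombus":
        return dims["d1"] * dims["d2"] / 2
    if shape == "ellipse":
        return math.pi * dims["semi_major"] * dims["semi_minor"]
    if shape == "hexagon":
        return 3 * math.sqrt(3) / 2 * dims["side"] ** 2
    raise ValueError(f"unsupported shape: {shape}")

calculate_area("rhombus", d1=6, d2=4)  # → 12.0
```

A description that named these per-shape requirements would let an agent pick the right parameters on the first call.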

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so all parameters are documented in the input schema. The description does not add meaning beyond what the schema provides, but baseline is 3 given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool calculates area for geometric shapes. The verb 'calculate' and resource 'area' are specific. However, it does not differentiate from sibling tools like calculate_rectangle or calculate_cylinder, which might lead to confusion about which tool to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus shape-specific calculators (e.g., calculate_rectangle). The description does not mention context, prerequisites, or alternatives, leaving the agent to infer usage based on name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_auto_entrepreneur (C)

Calculate French auto-entrepreneur (micro-enterprise) net income and social charges. Returns: {social_charges_rate_pct, social_charges, abatement_fiscal_pct, taxable_income_approx, net_before_tax, cfe_estimate_eur}. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
revenue | Yes | Annual revenue (chiffre d'affaires) in euros |
category | No | Activity category: vente (sales), service_bic, service_bnc, liberal | service_bnc

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry behavioral disclosure. It only states the purpose without revealing how results are returned (e.g., net income vs social charges breakdown), any assumptions about tax year, or limitations. Minimal behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no redundant words. Could be improved by front-loading output info, but it is concise for its length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, and description does not clarify the return format. For a calculation tool, users need to know what the result includes (e.g., net income figure, social charges amount). Additionally, no mention of tax rules or assumptions, so completeness is low.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
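A hedged sketch of what such a return object likely contains. The contribution rates and fiscal abatements below are illustrative approximations only; the real URSSAF rates change yearly and are not documented by the tool:

```python
# Illustrative 'micro' parameters, NOT official figures; real rates change yearly.
PARAMS = {"vente":       {"charges_pct": 12.3, "abatement_pct": 71},
          "service_bic": {"charges_pct": 21.2, "abatement_pct": 50},
          "service_bnc": {"charges_pct": 23.1, "abatement_pct": 34}}

def auto_entrepreneur(revenue: float, category: str = "service_bnc") -> dict:
    p = PARAMS[category]
    charges = revenue * p["charges_pct"] / 100       # social contributions
    taxable = revenue * (1 - p["abatement_pct"] / 100)  # after fiscal abatement
    return {"social_charges": round(charges, 2),
            "taxable_income_approx": round(taxable, 2),
            "net_before_tax": round(revenue - charges, 2)}
```

Stating the rate table's vintage (tax year) in the description would resolve the "assumptions about tax year" gap noted above.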

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema describes both parameters (revenue and category) with descriptions and enum. Schema coverage is 100%, so baseline is 3. Description adds no extra meaning beyond what schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it calculates net income and social charges for French auto-entrepreneur. The verb and resource are specific. However, it does not explicitly differentiate from similar tools like calculate_french_salary or other French tax calculators, though the auto-entrepreneur context is unique.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs other French tax/income calculators. No explicit when-not or alternative tools mentioned. The description implies use for auto-entrepreneur but gives no context for exclusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_average (B)

Compute simple, weighted, geometric, or harmonic mean. Use for grade averages, returns, or rates. Inputs: values list, optional weights, mode. Returns mean and detail. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
values | Yes | Array of numbers |
weights | No | Optional weights for weighted average |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose behavioral traits. The description only says 'Calculate', which implies a read-only operation, but it does not explicitly state that the tool does not mutate state, require authentication, or have side effects. For a computation tool this is acceptable, but a higher score would demand explicit safety guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence of six words, conveying the essential information without any fluff. Every word earns its place, and the structure is front-loaded with the verb and core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should explain what the tool returns (e.g., a number), how it selects between simple, weighted, and geometric means (e.g., based on presence of weights), and any constraints (e.g., values must be positive for geometric mean). Critical details are missing, reducing its completeness for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
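A sketch of the selection logic an agent would need spelled out. The 'mode' switch below is hypothetical, since the published schema exposes only 'values' and 'weights'; the positivity check for the geometric mean is the kind of constraint the verdict above says is missing:

```python
import math

def mean(values, weights=None, mode="simple"):
    # Hypothetical mode selector; the real tool's schema has no such parameter.
    if mode == "simple":
        return sum(values) / len(values)
    if mode == "weighted":
        w = weights or [1] * len(values)
        return sum(v * x for v, x in zip(values, w)) / sum(w)
    if mode == "geometric":
        if any(v <= 0 for v in values):
            raise ValueError("geometric mean requires positive values")
        return math.prod(values) ** (1 / len(values))
    if mode == "harmonic":
        return len(values) / sum(1 / v for v in values)
    raise ValueError(f"unknown mode: {mode}")

mean([2, 8], mode="geometric")  # → 4.0
```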

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds ambiguity. It mentions 'geometric mean' as a third type, yet the schema has only two parameters (values and weights) with no parameter to select the type. The description therefore implies a capability not supported by the schema, potentially misleading an agent into thinking there is a way to explicitly choose 'geometric'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates three specific types of means (simple, weighted, geometric). It uses a specific verb ('Calculate') and resource ('mean'), and distinguishes itself from other 'calculate_*' sibling tools by naming the exact mathematical operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies it is for general mean calculations, but it does not mention when not to use it (e.g., for median or mode) or if other available tools might be more appropriate for specific average types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_bac_points (B)

Estimate French Baccalauréat final score from grades and coefficients. Use to track lycée results before official scores. Inputs: grades by subject with coefficients. Returns weighted average and mention. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
grand_oral | Yes | Grand oral (/20, coeff 10) |
philosophy | Yes | Philosophy (/20, coeff 8) |
specialty1 | Yes | Specialty 1 (/20, coeff 16) |
specialty2 | Yes | Specialty 2 (/20, coeff 16) |
french_oral | Yes | French oral exam (/20, coeff 5) |
french_written | Yes | French written exam (/20, coeff 5) |
continuous_control | Yes | Continuous assessment score (/720 = 40%) |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits but only states it calculates a score. It does not mention that there are no side effects, that the tool is read-only, or how the output is structured. The agent is left guessing about behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no extraneous content. It is front-loaded with the key action, though it could benefit from a brief output explanation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 7 required parameters and no output schema. The description does not explain how the score is computed (e.g., weighted sum out of 100) or what the returned value represents. For a calculation tool, this is insufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
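One plausible reading of the schema's coefficients, sketched for illustration: exam coefficients sum to 60 (60% of the final mark) and the continuous assessment score out of 720 carries the remaining 40%. The official computation may differ, and the mention thresholds are the standard published ones:

```python
COEFFS = {"french_written": 5, "french_oral": 5, "philosophy": 8,
          "grand_oral": 10, "specialty1": 16, "specialty2": 16}  # sums to 60

def bac_estimate(grades: dict, continuous_control: float) -> dict:
    # Assumed weighting: exams 60% of the final /20 mark, continuous control 40%
    # (the schema gives continuous control as a score out of 720).
    exam_avg = sum(grades[k] * c for k, c in COEFFS.items()) / sum(COEFFS.values())
    final = 0.6 * exam_avg + 0.4 * (continuous_control / 720 * 20)
    mention = next(label for threshold, label in
                   [(16, "très bien"), (14, "bien"), (12, "assez bien"),
                    (10, "admis"), (0, "refusé")] if final >= threshold)
    return {"final_sur_20": round(final, 2), "mention": mention}
```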

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter having a description including coefficient and range. The description adds no additional semantic meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Calculate French Baccalaureat score estimation' clearly states the verb (calculate) and resource (French Baccalaureat score), distinguishing it from sibling tools like calculate_brevet_points and calculate_parcoursup_points which target different exams.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives or any prerequisites. The description does not mention that all seven grades are required or that it computes a weighted total.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_baking_altitude (B)

Adjust baking recipe for high altitude (less leavening, more liquid, higher temp). Use for mountain cooking. Inputs: altitude m, ingredients. Returns adjusted recipe. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
altitude_m | Yes | Altitude in meters |
flour_cups | Yes | Flour in cups |
sugar_cups | Yes | Sugar in cups |
liquid_cups | Yes | Liquid in cups |
oven_temp_c | Yes | Oven temperature °C |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fails to disclose behavioral traits like output format, error handling, or mutation side effects; it is too vague.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, front-loaded, and effective, though it could be expanded with usage guidance without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters and no output schema, the description is incomplete—it does not specify the output or limitations, requiring more context for proper agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All five parameters are described in the input schema (100% coverage), so the description does not need to add further parameter semantics; baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it adjusts a baking recipe for high altitude cooking, with a specific verb and resource, distinguishing it from general baking conversion tools like 'calculate_baking_conversion'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as 'calculate_baking_conversion' or when not to use it, leaving the agent to infer without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_baking_conversion (C)

Convert baking measurements between cups, tablespoons, grams, and milliliters for common ingredients. Use for translating recipes across regions. Inputs: ingredient, value, from-unit, to-unit. Returns converted quantity. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
ingredient | Yes | |
quantity_cups | Yes | |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states the conversion action without disclosing behavioral traits such as handling of rounding, precision, idempotency, or side effects. The agent lacks information about what happens when the tool is invoked.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words. It is front-loaded and to the point. However, given the simplicity of the tool, a slightly longer description could provide more value without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is present, so the description should clarify what is returned. It does not specify the output format (e.g., grams as a number, unit label). The ingredient list is implicit in the schema but not summarized. For a simple conversion tool, the description is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
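For illustration, the kind of density table such a tool presumably relies on. The gram-per-cup values are common kitchen approximations, not the server's actual constants, and the ingredient keys are assumed:

```python
GRAMS_PER_CUP = {"flour": 120, "sugar": 200, "butter": 227}  # illustrative densities

def cups_to_grams(ingredient: str, quantity_cups: float) -> float:
    try:
        return quantity_cups * GRAMS_PER_CUP[ingredient]
    except KeyError:
        raise ValueError(f"no density on file for {ingredient!r}") from None

cups_to_grams("flour", 2.5)  # → 300.0
```

Documenting the assumed densities and the rounding rule would cover most of what the Completeness verdict flags.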

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%: the parameters (ingredient, quantity_cups) are defined in the schema but carry no descriptions, and the tool description does not compensate. It does not explain units, ingredient density assumptions, or any constraints beyond the enum list and numeric range.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: convert cups to grams for baking ingredients. It uses a specific verb (Convert) and resource (cups to grams for common baking ingredients). However, it does not distinguish itself from sibling tools like calculate_cooking_conversion or convert_cooking, which may have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. No prerequisites, exclusions, or context are given for appropriate usage. The description assumes the agent knows when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_base_converter (C)

Convert numbers between bases (binary, octal, decimal, hexadecimal, any base 2-36). Returns: {input, decimal_value, result}. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
value | Yes | Number as string |
to_base | Yes | Target base |
from_base | Yes | Source base |

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
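The description's two-step contract (parse the string in from_base to a decimal value, then re-encode in to_base) can be sketched as follows; this is an illustrative reimplementation, not the server's code:

```python
# Illustrative sketch only; not the server's implementation.
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def convert_base(value: str, from_base: int, to_base: int) -> dict:
    decimal_value = int(value, from_base)   # int() natively parses bases 2-36
    if decimal_value == 0:
        result = "0"
    else:
        n, out = decimal_value, []
        while n:
            n, r = divmod(n, to_base)       # peel off digits, least significant first
            out.append(DIGITS[r])
        result = "".join(reversed(out))
    return {"input": value, "decimal_value": decimal_value, "result": result}

convert_base("ff", 16, 2)   # {'input': 'ff', 'decimal_value': 255, 'result': '11111111'}
```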
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It does not mention input constraints beyond the schema, output format, error handling, or support for negative numbers or fractions, leaving significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no extraneous words. It is appropriately concise and front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 3 required parameters and no output schema, the description is too brief. It does not explain the expected input format, return value, edge cases, or provide examples, making it incomplete for a conversion tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameter descriptions already exist. The tool description adds no additional meaning beyond what the schema provides, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and the resource 'numbers between bases', listing common bases. However, it does not differentiate from the sibling tool 'calculate_number_base_convert', which likely performs the same function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives no guidance on when to use this tool vs alternatives, nor does it mention prerequisites or exclusions. It merely states what it does.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_beam_load (B)

Calculate max bending moment and shear for a beam under uniform distributed load. Returns: {load_kN_per_m, max_moment_kNm, max_shear_kN, note}. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema):
span_m (required): Beam span in meters
beam_type (optional, default: simply_supported): Support type
load_kg_per_m (required): Distributed load in kg/m

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
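For a simply supported beam under a uniform load, the standard formulas are M_max = wL²/8 and V_max = wL/2. A hedged sketch of what the tool likely computes; the cantilever branch is an assumption inferred only from the beam_type parameter, not confirmed by the schema:

```python
G = 9.81  # gravity, to turn a mass load (kg/m) into a force load (kN/m)

def beam_load(span_m: float, load_kg_per_m: float,
              beam_type: str = "simply_supported") -> dict:
    w = load_kg_per_m * G / 1000.0               # kN/m
    if beam_type == "simply_supported":
        max_moment = w * span_m ** 2 / 8         # M_max = wL^2/8 at midspan
        max_shear = w * span_m / 2               # V_max = wL/2 at the supports
    else:  # hypothetical cantilever branch (not confirmed by the schema)
        max_moment = w * span_m ** 2 / 2         # M_max = wL^2/2 at the fixed end
        max_shear = w * span_m                   # V_max = wL
    return {"load_kN_per_m": round(w, 3),
            "max_moment_kNm": round(max_moment, 2),
            "max_shear_kN": round(max_shear, 2)}
```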
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavior. It states the calculation but does not mention that it is a pure computation (no side effects), nor does it describe the output format or any assumptions (e.g., standard formulas, units).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, direct sentence that front-loads the main action. No wasted words, efficient for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with no output schema, the description is adequate but incomplete. It mentions the input type (uniform distributed load) but does not specify output values or units, which is necessary for an agent to use the result correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers all parameters with descriptions (100% coverage). The description adds no extra meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates max bending moment and shear for a beam under uniform distributed load. It specifies the resource (beam) and the computation, but does not explicitly differentiate from sibling tools beyond the name, though the name suggests specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives or prerequisites. The description only states what it does, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_belgian_car_advantage (B)

Calculate Belgian benefit-in-kind for company car (avantage de toute nature voiture). Returns: {co2_reference, benefit_rate_pct, annual_taxable_benefit, monthly_taxable_benefit, estimated_monthly_tax_impact}. See list_bundles for related 'finance-belgique' calculators.

Parameters (JSON Schema):
co2 (required): CO2 emissions in g/km
fuel_type (optional, default: essence): Fuel type
catalog_value (required): Catalog value of the vehicle (HTVA) in euros

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
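The return fields suggest the widely published ATN formula: annual benefit = catalog value × CO2 percentage × 6/7, where the CO2 percentage starts at 5.5 % at the yearly reference emission, moves 0.1 % per g/km above or below it, and is clamped to [4 %, 18 %]. A sketch under those assumptions; the reference values below are placeholders, and the age-based catalog-value reduction and legal minimum benefit are omitted:

```python
# Placeholder reference emissions (g/km); the real values are set yearly.
CO2_REFERENCE = {"essence": 78, "diesel": 65}

def car_benefit(catalog_value: float, co2: int, fuel_type: str = "essence") -> dict:
    ref = CO2_REFERENCE[fuel_type]
    # 5.5 % at the reference, +/- 0.1 % per g/km, clamped to [4 %, 18 %]
    rate = min(max(5.5 + 0.1 * (co2 - ref), 4.0), 18.0)
    annual = catalog_value * (rate / 100) * 6 / 7   # 6/7: employee-use fraction
    return {"co2_reference": ref,
            "benefit_rate_pct": round(rate, 1),
            "annual_taxable_benefit": round(annual, 2),
            "monthly_taxable_benefit": round(annual / 12, 2)}
```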
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Only states calculation purpose without disclosing mutability, data requirements, or return format. Lacks context on tax year applicability or formula dependencies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is concise and front-loaded. However, it could be more informative without significantly increasing length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should clarify what the result is (e.g., monthly/annual amount, currency). Does not specify output format or units, leaving ambiguity for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds no additional meaning beyond parameter names and types. Does not explain how parameters relate to the calculation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb (Calculate) and specific resource (Belgian benefit-in-kind for company car), including the French term. It is distinct from sibling tools which cover other Belgian calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. Does not mention prerequisites, year dependence, or comparison to other Belgian tax tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_belgian_donation (B)

Compute Belgian donation tax (droits de donation) by region (Bruxelles/Flandre/Wallonie). Use for estate planning in Belgium. Inputs: region, amount, recipient relation. Returns tax due and effective rate. See list_bundles for related 'finance-belgique' calculators.

Parameters (JSON Schema):
amount (required): Donation amount in euros
relationship (required): Relationship: direct_line (parents/children), between_spouses (or cohabitants), others

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description fails to disclose behavioral traits such as read-only nature, side effects, or dependencies. The word 'calculate' implies no side effects, but this is not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is concise and front-loaded with the action. However, it could include more detail without being verbose, such as output description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Description lacks output specification (e.g., returns tax amount), assumptions, or regional scope (Wallonia only). Incomplete for a 2-parameter tool with no output schema or annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters fully; description adds 'Wallonia rates' context but does not elaborate on how relationship or amount affect the calculation beyond the schema. Marginal added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool calculates Belgian donation tax with Wallonia rates, specifying the action and jurisdiction, and distinguishes from related tools like calculate_belgian_income_tax.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. Lacks mention of other Belgian regions (Flanders, Brussels) or that it's specific to Wallonia, which could lead to misuse.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_belgian_income_tax (B)

Calculate Belgian personal income tax (IPP/PB) using 2026 progressive brackets. Returns: {income, tax_free_amount, taxable_base, income_tax, effective_rate_pct, marginal_rate_pct, ...}. See list_bundles for related 'finance-belgique' calculators.

Parameters (JSON Schema):
income (required): Annual taxable income in euros

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
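Belgium's IPP has long used 25/40/45/50 % progressive rates; the thresholds and tax-free amount below are placeholders, not the server's 2026 figures. A sketch of the bracket walk behind the returned fields:

```python
TAX_FREE = 10_000.0   # placeholder tax-free amount, not the real 2026 figure
# (upper bound, rate) pairs; thresholds are illustrative placeholders
BRACKETS = [(16_000, 0.25), (28_000, 0.40), (48_000, 0.45), (float("inf"), 0.50)]

def belgian_income_tax(income: float) -> dict:
    taxable = max(income - TAX_FREE, 0.0)
    tax, lower, marginal = 0.0, 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable > lower:
            tax += (min(taxable, upper) - lower) * rate  # tax the slice in this bracket
            marginal = rate
        lower = upper
    return {"income": income, "tax_free_amount": TAX_FREE, "taxable_base": taxable,
            "income_tax": round(tax, 2),
            "effective_rate_pct": round(100 * tax / income, 2) if income else 0.0,
            "marginal_rate_pct": 100 * marginal}
```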
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool calculates tax but does not mention side effects, safety, authentication, rate limits, or return behavior. For a calculation tool, it is likely safe, but this is not confirmed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single sentence with no wasted words. It front-loads the verb and object, making it efficient and scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain what the tool returns (e.g., tax amount, effective rate). It does not, leaving ambiguity about the output format. For a tax calculation tool, this is a significant gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the single parameter 'income' (described as 'Annual taxable income in euros'). The description does not add any extra meaning beyond the schema, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Calculate Belgian personal income tax (IPP/PB) using 2026 progressive brackets', providing a specific verb, resource, and distinctive detail about the tax year and bracket type. This differentiates it from siblings like calculate_belgian_salary and other Belgian tax tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for computing Belgian personal income tax based on 2026 brackets but provides no explicit guidance on when to use this tool versus alternatives, nor any exclusions or prerequisites. Usage context is implied but not elaborated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_belgian_pension (B)

Estimate Belgian retirement pension based on career years and average salary. Use for retirement planning in Belgium. Inputs: years of contribution, average salary, status (employee/self-employed). Returns monthly pension estimate. See list_bundles for related 'finance-belgique' calculators.

Parameters (JSON Schema):
career_years (required): Number of career years
average_salary (required): Average annual salary in euros
household_type (optional, default: single): Pension type: single rate (60%) or household rate (75%)

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
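The 60 %/75 % rates in the household_type description point at the statutory formula: each career year earns 1/45th of the average salary at the applicable rate. A simplified sketch, assuming that formula; salary caps and revaluation are ignored:

```python
# Rates taken from the household_type parameter description.
RATES = {"single": 0.60, "household": 0.75}

def belgian_pension(career_years: int, average_salary: float,
                    household_type: str = "single") -> dict:
    fraction = min(career_years, 45) / 45        # a full career counts 45 years
    annual = fraction * average_salary * RATES[household_type]
    return {"annual_pension": round(annual, 2),
            "monthly_pension": round(annual / 12, 2)}
```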
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior. It only says 'estimate', which is vague. There is no mention of assumptions, legal basis, or limitations (e.g., only for salaried workers, based on current law).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, but it could be expanded slightly to improve completeness without becoming verbose. As is, it efficiently conveys the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (3 parameters, no output schema), the description adequately states the tool's purpose. However, it lacks details about the calculation basis (e.g., applies to salaried workers in Belgium, based on current regulations) which would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All three parameters have descriptions in the schema (100% coverage), so the baseline is 3. The tool description adds no extra meaning beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates Belgian statutory pension, distinguishing it from generic retirement calculators and other Belgian-specific tools like calculate_belgian_salary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like calculate_retirement_pension or when not to use it. The agent gets no decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_belgian_salary (B)

Convert Belgian gross monthly salary to net salary (approximation). Returns: {gross_monthly, social_cotisations_13_07pct, special_social_contribution, professional_withholding_tax, net_monthly, net_annual, ...}. See list_bundles for related 'finance-belgique' calculators.

Parameters (JSON Schema):
gross_monthly (required): Gross monthly salary in euros

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
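The return fields outline the gross-to-net pipeline: deduct 13.07 % ONSS social contributions, then withholding tax. A sketch of those steps; the flat 30 % withholding is a placeholder, since the real précompte professionnel follows progressive scales:

```python
def belgian_net_salary(gross_monthly: float, withholding_rate: float = 0.30) -> dict:
    social = gross_monthly * 0.1307              # 13.07 % ONSS, from the field name
    taxable = gross_monthly - social
    withholding = taxable * withholding_rate     # placeholder flat rate
    net = taxable - withholding
    return {"gross_monthly": gross_monthly,
            "social_cotisations_13_07pct": round(social, 2),
            "professional_withholding_tax": round(withholding, 2),
            "net_monthly": round(net, 2),
            "net_annual": round(net * 12, 2)}
```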
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It mentions 'approximation', which is helpful but does not detail what factors are included (e.g., employee contributions, tax brackets) or the accuracy range. Lacks specific context on limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that effectively communicates the core purpose. However, it could be slightly more informative without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of salary calculations (social security, taxes, deductions), the description is inadequate. No mention of assumptions, accuracy, or what net salary includes. No output schema exists to fill gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the single parameter 'gross_monthly' fully described. The description adds no additional meaning beyond what's in the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts Belgian gross monthly salary to net salary, using specific verb 'convert' and resource 'Belgian gross monthly salary'. Sibling tools like 'calculate_belgian_income_tax' and 'calculate_belgian_social_contributions' are distinct, so this tool is well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as 'calculate_belgian_income_tax' or 'calculate_belgian_social_contributions'. No prerequisites, limitations, or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_belgian_social_contributions (B)

Calculate Belgian self-employed social contributions (cotisations INASTI). Returns: {annual_income, tier1_up_to_73850_at_20_5pct, tier2_73850_to_108785_at_14_16pct, tier3_above_108785, total_contributions, effective_rate_pct}. See list_bundles for related 'finance-belgique' calculators.

Parameters (JSON Schema):
annual_income (required): Annual net professional income in euros

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
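The tier names in the return object spell out the rate structure: 20.5 % up to €73,850, 14.16 % from €73,850 to €108,785, and nothing above the ceiling. A sketch built only from those field names:

```python
def inasti_contributions(annual_income: float) -> dict:
    # Tier boundaries and rates read off the tool's return field names.
    t1 = min(annual_income, 73_850) * 0.205
    t2 = max(min(annual_income, 108_785) - 73_850, 0) * 0.1416
    total = t1 + t2                               # income above the ceiling adds nothing
    return {"annual_income": annual_income,
            "tier1": round(t1, 2), "tier2": round(t2, 2),
            "total_contributions": round(total, 2),
            "effective_rate_pct": round(100 * total / annual_income, 2)}
```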
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It does not disclose that the tool is read-only (likely) or any other side effects. For a calculation tool, minimal risk, but transparency is lacking.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loaded with the key action and object. It is concise and contains no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is minimally adequate but does not explain the return value or confirm the domain (e.g., Belgian self-employed). Lacks completeness for a fully autonomous agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a description for the single parameter 'annual_income' ('Annual net professional income in euros'). The tool description adds no extra information beyond the schema, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Calculate') and the specific resource ('Belgian self-employed social contributions (cotisations INASTI)'). It uniquely identifies the tool among many Belgian siblings by using the precise legal term INASTI.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, nor any conditions or prerequisites (e.g., must be self-employed in Belgium). The description omits context about eligibility or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_belgian_vat (C)

Calculate Belgian VAT — convert between HT and TTC. Returns: {amount_ht, amount_ttc, vat_amount, vat_rate}. See list_bundles for related 'finance-belgique' calculators.

Parameters (JSON Schema):
mode (optional, default: ht): Input mode: ht=before tax, ttc=after tax
rate (optional, default: 21): VAT rate: 6%, 12% or 21%
amount (required): Amount in euros

Output Schema

Parameters (JSON Schema):
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
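HT/TTC conversion is a single multiplication or division by (1 + rate/100). An illustrative sketch of both modes, mirroring the documented return fields:

```python
def belgian_vat(amount: float, mode: str = "ht", rate: float = 21) -> dict:
    factor = 1 + rate / 100
    if mode == "ht":
        ht, ttc = amount, amount * factor        # add VAT on top of the HT amount
    else:
        ht, ttc = amount / factor, amount        # strip VAT back out of the TTC amount
    return {"amount_ht": round(ht, 2), "amount_ttc": round(ttc, 2),
            "vat_amount": round(ttc - ht, 2), "vat_rate": rate}
```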
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. However, it only gives a brief purpose and does not disclose behavioral traits such as rounding, return format, or whether both HT and TTC are output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One concise sentence that front-loads the purpose with no wasted words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema provided, and description does not explain the return value or behavior (e.g., whether it returns the converted amount or both values). Incomplete for a tool with no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all parameters have descriptions). The description adds no additional meaning beyond the schema, but baseline is 3 due to high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Calculate Belgian VAT' and mentions conversion between HT and TTC, which is the core purpose. It distinguishes from sibling VAT calculators by specifying 'Belgian'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like calculate_french_vat or calculate_uk_vat. The description does not provide when-to-use or when-not-to-use context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_biorhythm (C)

Compute physical, emotional, and intellectual biorhythm cycles for a date based on birth date. Use for self-tracking enthusiasts (pseudoscience). Inputs: birth date, target date. Returns 3 cycle values (-100 to +100) and zone. See list_bundles for related 'fun' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| birth_date | Yes | Birth date YYYY-MM-DD | |
| target_date | Yes | Target date YYYY-MM-DD | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose behavioral traits such as output format, data requirements, or side effects. The agent is left unaware of what the tool returns or any constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that conveys the core function without extraneous words. However, it is too brief to carry the necessary detail, which costs it a point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema or annotations, the description should provide essential context about return values or usage expectations. It fails to do so, leaving significant gaps for a calculator tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides descriptions for both parameters (birth_date and target_date) with format YYYY-MM-DD, achieving 100% coverage. The description adds no additional meaning beyond the schema, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates three biorhythm cycles (physical, emotional, intellectual) using birth and target dates. It is specific and distinct from many sibling calculators, though it does not explicitly differentiate itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not mention prerequisites, context, or when not to use it, leaving the agent to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
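The cycle math the description refers to is the classical biorhythm model: three sine waves with periods of 23 (physical), 28 (emotional), and 33 (intellectual) days, evaluated at the number of days between birth and target date. A minimal sketch, assuming that standard model (the server's exact scaling and the 'zone' field are not shown here):

```python
import math
from datetime import date

# Classical biorhythm periods in days (assumed; the pseudoscientific standard)
PERIODS = {"physical": 23, "emotional": 28, "intellectual": 33}

def biorhythm(birth_date: str, target_date: str) -> dict:
    """Return the three cycle values, scaled to -100..+100 as the description states."""
    days = (date.fromisoformat(target_date) - date.fromisoformat(birth_date)).days
    return {name: round(100 * math.sin(2 * math.pi * days / period), 1)
            for name, period in PERIODS.items()}
```

On the birth date itself all three cycles cross zero, which is a handy sanity check for any implementation.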

calculate_blood_alcohol (C)

Estimate blood alcohol content (BAC) using Widmark formula. Returns: {bac_percent, legal_status_fr, estimated_sober_in_hours}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sex | Yes | Biological sex | |
| drinks | Yes | Number of standard drinks (1 drink = 14g pure alcohol) | |
| weight_kg | Yes | Body weight in kilograms | |
| hours_drinking | Yes | Hours elapsed since first drink | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only mentions the formula name without disclosing limitations (e.g., that the Widmark formula is an approximation that ignores food intake and metabolic variation). The description lacks critical behavioral context for a health-sensitive tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence of seven words is extremely concise and front-loaded with purpose. However, it may be too brief for a tool requiring usage guidance; conciseness should not come at the expense of completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description does not mention what the tool returns (e.g., BAC value, units, or confidence intervals). For a calculation tool with 4 parameters, this omission significantly reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has full coverage (100%) with descriptions for each parameter. The description adds no extra meaning beyond the schema, as 'using Widmark formula' is the only additional context. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states 'estimate blood alcohol content (BAC) using Widmark formula', which clearly identifies the tool's action (estimate), resource (BAC), and method (Widmark formula). However, it does not differentiate from sibling tools like calculate_bac_points or calculate_alcohol_units, which may cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Sibling tools exist for BAC or alcohol units, but the description provides no context on appropriate use cases or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
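The Widmark estimate named in the description is a textbook formula: alcohol mass divided by body water (weight times a sex-specific distribution factor), minus a fixed elimination rate per hour. A minimal sketch, assuming the standard distribution factors (0.68 male, 0.55 female), 14 g per standard drink as stated in the schema, and a typical elimination rate of 0.015 %BAC/hour (all assumptions, not the server's confirmed constants):

```python
def widmark_bac(drinks: float, weight_kg: float, sex: str, hours: float) -> float:
    """Estimate %BAC via the Widmark formula; floors at zero once alcohol is eliminated."""
    r = 0.68 if sex == "male" else 0.55        # Widmark distribution factor (assumed)
    alcohol_g = drinks * 14.0                   # 1 standard drink = 14 g pure alcohol
    bac = alcohol_g / (weight_kg * 1000 * r) * 100 - 0.015 * hours
    return max(round(bac, 3), 0.0)
```

This is exactly the kind of detail (constants, units, floor-at-zero behavior) the Behavior critique above says the description should disclose.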

calculate_bmi (B)

Calculate Body Mass Index (BMI) and weight category. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| height_cm | Yes | Height in centimeters | |
| weight_kg | Yes | Weight in kilograms | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description offers minimal behavioral insight. It does not mention output format, validation rules beyond schema minimums, or edge cases. Users must infer behavior from the function name alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that front-loads the core purpose. It is efficient with no wasted words, though it could benefit from slightly more detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple BMI calculator with complete schema coverage, the description is adequate but lacks details on return values or category interpretation. It meets minimal viability but is not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and both parameters have descriptions in the schema. The description adds no extra meaning beyond the schema, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Body Mass Index (BMI) and weight category, using a specific verb and resource. It distinguishes itself among many sibling calculators by explicitly naming 'BMI' and 'weight category', making its purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, prerequisites, or situations to avoid. Given the large set of sibling calculators, the description lacks explicit context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
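The "category interpretation" the Completeness critique asks for is the standard WHO banding on top of the weight/height² formula. A minimal sketch, assuming the WHO cutoffs (the server's exact category labels are not shown here):

```python
def bmi(weight_kg: float, height_cm: float) -> tuple[float, str]:
    """BMI = kg / m^2, with the standard WHO weight category."""
    value = weight_kg / (height_cm / 100) ** 2
    if value < 18.5:
        category = "underweight"
    elif value < 25:
        category = "normal"
    elif value < 30:
        category = "overweight"
    else:
        category = "obese"
    return round(value, 1), category
```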

calculate_bmr (A)

Calculate Basal Metabolic Rate using Mifflin-St Jeor equation. Returns: {bmr_kcal}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| age | Yes | Age in years | |
| sex | Yes | Biological sex | |
| height_cm | Yes | Height in centimeters | |
| weight_kg | Yes | Weight in kilograms | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It names the equation but does not disclose that this is a read-only calculation, its limitations (e.g., not suited to athletes), or that it returns kcal/day.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the purpose. No extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator with no output schema, the description is adequate but omits the output unit (kcal/day). It is complete enough for an agent familiar with BMR but could be improved.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions, so baseline is 3. The description adds value by naming the Mifflin-St Jeor equation, which explains the relationship between parameters and the formula used.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Calculate Basal Metabolic Rate using Mifflin-St Jeor equation' with a specific verb and resource, clearly distinguishing it from sibling calculators. The method is named, adding precision.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like calculate_tdee. It lacks context for prerequisites or exclusions, leaving the agent to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
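The Mifflin-St Jeor equation the description names is a fixed linear formula, which is why naming it earns the Parameters score a point: the four inputs map directly onto its terms. A minimal sketch of the published equation (output in kcal/day, the unit the Behavior critique says the description omits):

```python
def bmr_mifflin(weight_kg: float, height_cm: float, age: int, sex: str) -> float:
    """Mifflin-St Jeor: 10*W + 6.25*H - 5*A, then +5 for males or -161 for females.
    Returns basal metabolic rate in kcal/day."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if sex == "male" else -161)
```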

calculate_body_fat (B)

Estimate body fat percentage from BMI, age and sex using Deurenberg equation. Returns: {body_fat_pct}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| age | Yes | Age in years | |
| bmi | Yes | Body Mass Index | |
| sex | Yes | Biological sex | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description must convey behavioral traits. It states the tool estimates but does not disclose whether it is read-only, has side effects, or handles errors. A simple calculation tool likely has no side effects, but this is not explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence of 14 words, directly stating purpose and method. No redundancy or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation with complete schema, the description is mostly adequate. However, it does not mention the output format (e.g., percentage) or limitations. Given many siblings, slightly more context would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions, so the baseline is 3. The description adds the context of the Deurenberg equation but does not provide further semantic detail beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Describes the tool as estimating body fat percentage from BMI, age, and sex using the Deurenberg equation. The verb 'Estimate' and resource 'body fat percentage' are clear. The mention of the specific equation distinguishes it from siblings like calculate_body_fat_navy, though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as calculate_body_fat_navy. The description does not specify when not to use it or provide context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
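The Deurenberg equation the description names is another fixed linear formula over the three schema inputs. A minimal sketch of the published equation (the percentage output format is the detail the Completeness critique flags as missing):

```python
def body_fat_deurenberg(bmi: float, age: int, sex: str) -> float:
    """Deurenberg estimate: 1.20*BMI + 0.23*age - 10.8*sex - 5.4,
    with sex coded 1 for male, 0 for female. Returns body fat in percent."""
    s = 1 if sex == "male" else 0
    return round(1.20 * bmi + 0.23 * age - 10.8 * s - 5.4, 1)
```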

calculate_body_fat_navy (B)

Calculate body fat percentage using the US Navy circumference method. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sex | Yes | Biological sex | |
| hip_cm | No | Hip circumference in cm (widest point, required for females) | |
| neck_cm | Yes | Neck circumference in cm (below larynx) | |
| waist_cm | Yes | Waist circumference in cm (at navel for males, narrowest for females) | |
| height_cm | Yes | Height in centimeters | |
| weight_kg | No | Body weight in kg (default 70kg for fat mass calculation) | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not disclose any behavioral traits beyond the method name. It does not mention that hip_cm is required only for females, or that weight_kg defaults to 70, or explain the output format. With no annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that is front-loaded and efficient. Every word is necessary and adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is too minimal. It does not explain the output, edge cases, or that the method requires hip circumference for females. Given the tool's complexity (6 parameters, specific method), it should provide more context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no extra meaning beyond what is in the schema, earning the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: calculating body fat percentage using the US Navy circumference method. This distinguishes it from sibling tools like 'calculate_body_fat' (likely a different method) and 'calculate_bmi'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs alternatives, such as when a female user must provide hip measurement. The description lacks any context for selection among sibling calculators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
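The hip-required-for-females interaction the critiques keep returning to follows directly from the US Navy method's two sex-specific formulas. A minimal sketch using the commonly published metric-form constants (these constants, and the error handling, are assumptions, not the server's verified behavior):

```python
import math

def body_fat_navy(sex, height_cm, neck_cm, waist_cm, hip_cm=None):
    """US Navy circumference method; hip_cm is only used (and required) for females.
    Returns estimated body fat in percent."""
    if sex == "male":
        density = (1.0324 - 0.19077 * math.log10(waist_cm - neck_cm)
                          + 0.15456 * math.log10(height_cm))
    else:
        if hip_cm is None:
            raise ValueError("hip_cm is required for females")
        density = (1.29579 - 0.35004 * math.log10(waist_cm + hip_cm - neck_cm)
                           + 0.22100 * math.log10(height_cm))
    return round(495 / density - 450, 1)
```

Surfacing that conditional requirement in the description, rather than only in the hip_cm schema note, is precisely what the Behavior and Usage Guidelines scores penalize.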

calculate_bpm_to_ms (A)

Convert BPM tempo to millisecond delay times for different note values. See list_bundles for related 'musique' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| bpm | Yes | Tempo in beats per minute | |
| note_value | Yes | Musical note value to convert | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It correctly implies a pure calculation with no side effects, but lacks details such as rounding behavior, output format, or confirmation that it returns a single value. The description is adequate for a simple conversion but could be more informative.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that immediately conveys the tool's function. There is no wasted text, and it is front-loaded with the key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is nearly complete. It states the input and output. However, it could mention the output format or that it returns delay times in milliseconds, which is implied but not explicit. Still, it is sufficient for an agent to use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have descriptions in the schema (100% coverage), so the baseline is 3. The description adds little beyond the schema—it mentions 'millisecond delay times' but does not explain how the conversion works or the musical context of note values. It does not significantly enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: converting BPM tempo to millisecond delay times for different note values. This is specific, uses a verb ('convert') and resource (BPM to ms), and distinguishes it from the many other calculation tools on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives, or when not to use it. The description only states what it does, leaving the agent to infer its applicability from context. This is a significant gap for a tool with many siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
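The conversion itself is one division plus a note-length multiplier: one quarter-note beat lasts 60000/BPM milliseconds. A minimal sketch, assuming a conventional note-value map (the server's accepted note_value strings are not documented in this listing):

```python
# Beats per note value, relative to a quarter note (assumed naming convention)
NOTE_BEATS = {"whole": 4.0, "half": 2.0, "quarter": 1.0,
              "eighth": 0.5, "sixteenth": 0.25, "dotted_eighth": 0.75}

def bpm_to_ms(bpm: float, note_value: str = "quarter") -> float:
    """Delay time in milliseconds for the given note value at the given tempo."""
    return round(60000.0 / bpm * NOTE_BEATS[note_value], 2)
```

At 120 BPM a quarter note is 500 ms, which is the delay-time figure most musicians would expect to see in the output.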

calculate_braking_distance (B)

Compute reaction + braking distance by road condition (dry/wet/icy). Use for driver safety education. Inputs: speed km/h, reaction time s, road type. Returns total stopping distance m. See list_bundles for related 'auto-transport' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| condition | No | Road | dry |
| speed_kmh | Yes | Speed km/h | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. It does not mention underlying assumptions (e.g., reaction time, formula), limitations, or whether the result is for a standard vehicle. The description is too vague for a calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short (7 words) and front-loaded with key information. It is concise but could be improved by adding a bit more context without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator with two parameters and no output schema, the description is nearly complete. However, it lacks details on what factors influence the calculation (e.g., reaction time constant) and what the output represents (distance in meters or feet), leaving some ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters, so the schema already defines them. The description adds context (reaction + braking) but no new semantic detail beyond the schema. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool calculates 'Reaction + braking distance by road condition', which is a specific verb-resource combination. It distinguishes from the many other calculate_* tools on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for braking distance calculations, but provides no explicit guidance on when to use this tool versus alternatives, nor any conditions or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_braquet (B)

Compute cycling gear ratio (braquet) and development per pedal turn. Use for road cycling gear analysis. Inputs: chainring teeth, sprocket teeth, wheel diameter mm. Returns ratio and meters per pedal turn. See list_bundles for related 'sport' calculators.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| cog_teeth | Yes | Number of teeth on the rear cog | |
| chainring_teeth | Yes | Number of teeth on the front chainring | |

Output Schema

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It states 'speed at various cadences', but the input schema does not include a cadence parameter, leaving ambiguity about how speed is computed. The behavior is partially disclosed, but key aspects (e.g., assumed cadences, output structure) are missing, which could confuse the agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no extraneous words, front-loaded with the primary purpose. It could be slightly more structured (e.g., a separate sentence for speed), but as one sentence it remains acceptable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain what the tool returns. It mentions gear ratio and speed but does not specify units, range, or format. The lack of cadence parameter further reduces completeness. An agent would need to infer or test the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the input schema already fully describes both parameters. The description adds 'bicycle gear ratio' and 'speed at various cadences' but does not elaborate on parameter roles beyond what's in the schema. It provides marginal additional meaning, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates bicycle gear ratio (braquet) and speed at various cadences. This distinguishes it from sibling tools like calculate_gear_ratio which may be for general vehicles, and calculate_cycling_power which focuses on power. The verb 'calculate' and specific resource 'bicycle gear ratio' make purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. While the name and description imply it's for bicycle gear ratios, there is no mention of exclusions (e.g., not for car gear ratios) or context for using this over similar tools like calculate_gear_ratio. Usage context is implied but not spelled out.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
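The ambiguity flagged above ('speed at various cadences' with no cadence input) can be made concrete: such a tool typically evaluates a fixed, internal set of cadences. A minimal sketch, where the wheel circumference and the preset cadences are illustrative assumptions, not the server's actual values:

```python
# Sketch of how a bicycle gear-ratio tool might report speed without a
# cadence parameter: it evaluates a fixed internal list of cadences.
# The wheel circumference and preset cadences are illustrative assumptions.

WHEEL_CIRCUMFERENCE_M = 2.1      # assumed ~700x25c road wheel
PRESET_CADENCES_RPM = (60, 80, 100)

def gear_speeds(chainring_teeth, cog_teeth):
    """Return the gear ratio and speed (km/h) at each preset cadence."""
    ratio = chainring_teeth / cog_teeth
    development_m = ratio * WHEEL_CIRCUMFERENCE_M   # metres per crank turn
    speeds = {
        rpm: round(development_m * rpm * 60 / 1000, 1)   # km/h
        for rpm in PRESET_CADENCES_RPM
    }
    return {"ratio": round(ratio, 2), "speed_kmh": speeds}
```

Disclosing the assumed cadence list in the description would resolve the gap the Behavior score points at.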

calculate_bra_size (B)

Calculate bra size in FR, US or UK system from underbust and bust measurements (cm). Returns: {band_size, size, cup_diff_cm}. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
system | Yes
bust_cm | Yes
underbust_cm | Yes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
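The promised {band_size, size, cup_diff_cm} result can be illustrated for the FR system. In this sketch, the band formula (underbust + 15, rounded to the nearest 5) and the 2 cm cup steps are common sizing rules of thumb, assumed here rather than confirmed server behavior:

```python
# Illustrative FR bra-size calculation. The band formula and cup
# thresholds are common rules of thumb, not necessarily the exact
# constants this server implements.

CUP_LETTERS = "ABCDEFG"

def bra_size_fr(underbust_cm, bust_cm):
    band_size = round((underbust_cm + 15) / 5) * 5   # nearest multiple of 5
    cup_diff = bust_cm - underbust_cm
    # 2 cm per cup, starting around a 12-13 cm difference (assumed)
    idx = max(0, min(int((cup_diff - 12) // 2), len(CUP_LETTERS) - 1))
    return {"band_size": band_size,
            "size": f"{band_size}{CUP_LETTERS[idx]}",
            "cup_diff_cm": cup_diff}
```

Stating these thresholds in the description would address the Completeness critique below.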
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must disclose behavioral traits. It only states the function without any mention of assumptions, rounding, accuracy, or limitations. The tool is deterministic, but transparency is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently conveys the tool's purpose and inputs without unnecessary words. It is well-structured with key elements front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is present, and the description does not indicate what the result looks like (e.g., size string). It lacks details on precision, rounding, or how measurements are used. For a complete tool, this is insufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but parameter names are self-explanatory (underbust_cm, bust_cm, system). The description adds units (cm) and lists the systems, but does not elaborate on constraints like exclusiveMinimum or provide additional context. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: calculating bra size from underbust and bust measurements in three systems (FR, US, UK). It uses a specific verb ('calculate') and resource ('bra size'), and distinguishes from siblings like 'calculate_bra_size_convert'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit usage guidelines are provided. The description implies use when bra size calculation is needed, but does not mention when to use alternatives or any prerequisites. The context of simple calculation is clear, but lacks depth.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_bra_size_convert (A)

Convert bra size between FR, US, UK and EU systems. Returns: {FR, US, UK, EU}. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
cup | Yes | Cup letter (A, B, C, D, DD, E, F)
band_size | Yes | Band size in source system (numeric)
from_system | Yes | Source sizing system

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
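A converter like this usually normalizes the band to one reference system and applies fixed offsets. The offsets below (FR = EU + 15; US/UK bands stepping by 2 per 5 cm of EU band) are commonly published values, used here as assumptions; real converters also adjust cup letters above D, which this sketch deliberately skips:

```python
# Hedged sketch of bra band-size conversion. Offsets are commonly
# published values, assumed here; the cup letter is passed through
# unchanged, which is a simplification.

def convert_band(band_size, cup, from_system):
    # Normalize the band to the EU system first.
    if from_system == "FR":
        eu = band_size - 15
    elif from_system == "EU":
        eu = band_size
    else:  # "US" or "UK" (inch-based band numbers)
        eu = (band_size - 30) / 2 * 5 + 65
    us_uk = (eu - 65) / 5 * 2 + 30
    return {"FR": f"{eu + 15:g}{cup}", "EU": f"{eu:g}{cup}",
            "US": f"{us_uk:g}{cup}", "UK": f"{us_uk:g}{cup}"}
```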
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. The description is straightforward ('Convert bra size') and implies a simple transformation without side effects, but it does not disclose any behavioral traits such as error behavior, rate limits, or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and directly states the tool's function. It is front-loaded with the core purpose, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool with no output schema, the description covers the basic purpose but lacks details on return format (e.g., single result or all systems) and potential error cases. It is minimally complete but could be improved with examples or output clarification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers all 3 parameters with descriptions, achieving 100% coverage. The description adds little beyond listing the systems, which is already captured by the enum in 'from_system'. Therefore, the description provides marginal added value, warranting a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: converting bra sizes between four specified systems (FR, US, UK, EU). It uses a specific verb ('convert') and resource ('bra size'), and the mention of multiple systems distinguishes it from siblings like 'calculate_bra_size'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives, nor does it mention when not to use it. There is no explicit context or exclusion criteria, leaving the agent to infer usage from the tool name and siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_bread_hydration (B)

Compute baker's hydration % = water/flour×100. Use for bread recipe analysis. Inputs: flour g, water g. Returns hydration % and dough type (firm/standard/wet). See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
flour_grams | Yes | Flour weight grams
water_grams | Yes | Water weight grams

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
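The formula given in the description maps directly to code. A minimal sketch; the firm/standard/wet thresholds (here 60% and 70%) are assumptions, since the description does not state them:

```python
# Baker's hydration: water / flour * 100.
# The dough-type thresholds below are illustrative assumptions.

def bread_hydration(flour_grams, water_grams):
    pct = round(water_grams / flour_grams * 100, 1)
    if pct < 60:
        dough = "firm"
    elif pct <= 70:
        dough = "standard"
    else:
        dough = "wet"
    return {"hydration_pct": pct, "dough_type": dough}
```

Disclosing whether the percentage is returned as 65 or 0.65 would close the gap the Behavior score identifies.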
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It only states the calculation but does not reveal the output format (e.g., returns a percentage value like 65 or 0.65), any constraints, or side effects. This is a significant gap for a tool that returns a result the agent must interpret.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, brief sentence with no unnecessary words. It is front-loaded with the key action and resource, making it easy to scan. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 numeric inputs, straightforward calculation), the description is somewhat complete, but it lacks information about the output value's format and interpretation. Since there is no output schema, the description should clarify what 'hydration percentage' means numerically.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema, with 100% description coverage, already provides clear parameter meanings ('Flour weight grams', 'Water weight grams'). The description does not add any extra semantic value beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: calculate bread dough hydration percentage. It uses a specific verb ('calculate') and resource ('bread dough hydration percentage'), making the purpose unambiguous. However, it does not explicitly distinguish itself from other calculate_ tools, though the domain (bread hydration) is unique among siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when one has flour and water weights, but it does not explicitly state when to use this tool versus others, nor does it provide exclusions or alternative tools. For a simple calculation, this implicit guidance is adequate but lacks explicitness.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_break_even (B)

Compute break-even point in units and revenue. Use for business plans and pricing decisions. Inputs: fixed costs, price/unit, variable cost/unit. Returns break-even units and revenue. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
fixed_costs | Yes | Total fixed costs
price_per_unit | Yes | Selling price per unit
variable_cost_per_unit | Yes | Variable cost per unit

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
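The break-even relationship the description relies on is standard: units = fixed costs / (price - variable cost), revenue = units * price. A sketch; rounding up to whole units and the edge-case handling are assumptions:

```python
import math

def break_even(fixed_costs, price_per_unit, variable_cost_per_unit):
    # Contribution margin per unit; break-even is undefined when it is <= 0.
    margin = price_per_unit - variable_cost_per_unit
    if margin <= 0:
        raise ValueError("price per unit must exceed variable cost per unit")
    units = math.ceil(fixed_costs / margin)   # round up: partial units cannot be sold
    return {"break_even_units": units,
            "break_even_revenue": units * price_per_unit}
```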
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as the handling of edge cases (e.g., a price per unit at or below the variable cost, where break-even is undefined), the output format, or potential errors. The agent cannot infer behavior beyond 'calculation'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that efficiently communicates the tool's purpose. It is front-loaded and contains no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain return values or provide usage context. It does not specify the output format (e.g., units vs. revenue separately), nor any edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes all three parameters with descriptions, achieving 100% coverage. The description adds no additional semantic value beyond what is in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Compute break-even point in units and revenue', which clearly identifies the verb, resource, and expected outputs. It distinguishes itself from numerous sibling calculate tools by specifying a unique financial metric.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other financial calculators. It does not mention prerequisites, limitations, or context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_breeding_due_date (C)

Compute due date for animal breeding given mating date and species gestation period. Use for breeders. Inputs: species (dog/cat/horse/rabbit/cow), mating date. Returns due date and gestation milestones. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
animal | Yes
mating_date | Yes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
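The calculation itself is a date offset by a species-specific gestation period. The averages below (dog 63 days, cat 64, horse 340, rabbit 31, cow 283) are widely cited veterinary figures, not necessarily the server's exact constants, and the ISO date format is an assumption:

```python
from datetime import date, timedelta

# Typical gestation averages in days; widely cited, but assumed here.
GESTATION_DAYS = {"dog": 63, "cat": 64, "horse": 340, "rabbit": 31, "cow": 283}

def breeding_due_date(animal, mating_date):
    """mating_date in ISO format 'YYYY-MM-DD'; returns the estimated due date."""
    due = date.fromisoformat(mating_date) + timedelta(days=GESTATION_DAYS[animal])
    return due.isoformat()
```

Listing the supported species and the expected date format in the description would address the Completeness and Parameters critiques below.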
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It describes the tool's non-destructive calculation nature, but does not specify the output format or any edge cases. The transparency is adequate but not enhanced beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the tool's purpose. It is appropriately concise, though it could expand slightly to include parameter details without losing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 parameters and no output schema, the description should at least mention the supported animals and date format. It fails to do so, leaving significant gaps in understanding. The presence of many similar sibling tools amplifies the need for more completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, yet the description provides no information about the two parameters (animal with enum, mating_date with pattern). The agent must rely solely on the schema, which lacks explanations. The description adds no value for parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'expected birth date from mating date for common pets'. It distinguishes itself from more specific sibling tools like calculate_dog_pregnancy or calculate_cat_pregnancy by covering multiple pet types, though it does not explicitly differentiate from similar tools like calculate_due_date.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like calculate_dog_pregnancy or calculate_pregnancy_due_date. An agent may select a more specific or less appropriate tool without context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_brevet_points (B)

Estimate French Brevet (DNB) score from grades and continuous-control marks. Use for collège students forecasting their result. Inputs: grades by subject, continuous control. Returns total points and mention. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
math | Yes | Math exam score (/100)
oral | Yes | Oral exam score (/100)
french | Yes | French exam score (/100)
science | Yes | Science score (/50)
history_geo | Yes | History-Geography score (/50)
socle_commun | Yes | Socle commun points (50-400)

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
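The parameter scales line up with the DNB structure: 400 exam points (French 100, math 100, oral 100, science 50, history-geography 50) plus up to 400 socle commun points, for 800 in total. A sketch using the published mention thresholds, which are assumed here to match the server's logic:

```python
# DNB total and mention. Thresholds follow the published DNB scale
# (admis >= 400, assez bien >= 480, bien >= 560, tres bien >= 640),
# assumed to match the server's rules.

def brevet_points(math, oral, french, science, history_geo, socle_commun):
    total = math + oral + french + science + history_geo + socle_commun
    if total >= 640:
        mention = "tres bien"
    elif total >= 560:
        mention = "bien"
    elif total >= 480:
        mention = "assez bien"
    elif total >= 400:
        mention = "admis"
    else:
        mention = "refuse"
    return {"total_points": total, "max_points": 800, "mention": mention}
```

Stating that the total is out of 800 and how the mention is derived would answer the Completeness critique below.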
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fails to disclose any behavioral traits beyond the obvious calculation. It does not explain what the tool returns (e.g., total score, pass/fail indication) or any side effects. The description adds minimal value beyond the name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no wordiness. It efficiently conveys the core purpose. However, it could be slightly more informative without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 6 required parameters with different scales and no output schema, the description is incomplete. It does not explain the calculation formula, what the result represents (e.g., total score out of 500), or any interpretation of the score. More context is needed for correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of parameters with descriptions and ranges, so the baseline is 3. The description does not add additional meaning beyond the schema, but it does confirm the overall purpose. No extra clarity on how parameters combine.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific resource ('French Brevet score') and the action ('Calculate'), distinguishing it from other calculate tools on the server. The name and description align perfectly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, context, or exclusion criteria. For a tool with many sibling calculators, this is a significant gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_brick_count (B)

Compute bricks or blocks needed for a wall including waste margin. Use for masonry projects. Inputs: wall dimensions, brick size, waste %. Returns brick count and pallets. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
type | No | Brick type | parpaing
height_m | Yes | Wall height m
length_m | Yes | Wall length m

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
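The description mentions a waste % input that the schema does not expose, which suggests a server-side default. In the sketch below, the face dimensions per brick type, the 5% waste margin, and the pallet counts are all illustrative assumptions:

```python
import math

BRICK_FACE_M2 = {"parpaing": 0.50 * 0.20, "brique": 0.22 * 0.065}  # assumed face sizes
WASTE_MARGIN = 0.05                                  # assumed default waste
BRICKS_PER_PALLET = {"parpaing": 50, "brique": 500}  # assumed pallet counts

def brick_count(height_m, length_m, type="parpaing"):  # 'type' mirrors the schema field
    wall_area = height_m * length_m
    bricks = math.ceil(wall_area / BRICK_FACE_M2[type] * (1 + WASTE_MARGIN))
    pallets = math.ceil(bricks / BRICKS_PER_PALLET[type])
    return {"bricks": bricks, "pallets": pallets}
```

Documenting the assumed waste margin and brick dimensions per type would resolve the gap between the description and the schema.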
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states the purpose but does not disclose behavioral traits such as whether it is read-only, what it returns, or any side effects. For a calculator tool, it could mention that it performs a mathematical calculation and returns a number.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, which is concise but minimally informative. It lacks any additional structure or details that would help the agent. It is not verbose but also not sufficiently helpful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, but with no output schema, the description should at least hint at the return value (e.g., number of bricks). It does not explain the calculation method or assumptions. For a basic calculator, it is adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all 3 parameters. The description adds no additional meaning beyond what the schema already provides (e.g., wall dimensions and brick type). Thus, baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates bricks/blocks for a wall. The verb 'calculate' and resource 'bricks/blocks for a wall' are specific, and it distinguishes from sibling tools like calculate_tile_quantity or calculate_paint_needed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., calculate_tile_quantity). There is no mention of prerequisites, limitations, or when not to use it. The description only states what it does, not when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_buoyancy (C)

Compute buoyancy force, displaced volume, or floating analysis (Archimedes). Use for physics or shipping. Inputs: object mass/volume, fluid density. Returns buoyant force and float/sink verdict. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
volume_m3 | Yes | Object volume m³
object_mass | Yes | Object mass kg
fluid_density | No | Fluid density kg/m³

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
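The Archimedes relation the description invokes is F = rho_fluid * V * g, with flotation when the object's mass is below the mass of displaced fluid. A sketch assuming water (1000 kg/m³) as the default for the optional fluid_density parameter:

```python
WATER_DENSITY = 1000.0   # kg/m3, assumed default for the optional parameter
G = 9.81                 # m/s^2

def buoyancy(volume_m3, object_mass, fluid_density=WATER_DENSITY):
    buoyant_force_n = fluid_density * volume_m3 * G   # Archimedes: F = rho * V * g
    displaced_mass = fluid_density * volume_m3        # kg of fluid displaced
    return {"buoyant_force_n": round(buoyant_force_n, 2),
            "floats": object_mass < displaced_mass}
```

Stating the default fluid density in the description would address the Behavior critique about undisclosed defaults.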
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility for behavioral disclosure. It fails to state whether the tool returns numeric values, boolean outcomes, or descriptive text. No mention of default fluid density behavior or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, consisting of a single concise phrase. However, the lack of structure (e.g., no clear separation of purpose vs. behavior) reduces its effectiveness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should clarify return values and units. It does not, leaving the agent uncertain about what the tool produces.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All three parameters are fully described in the input schema, so the description adds no extra meaning beyond what is already available. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's domain (buoyancy) and general purpose (force and floating analysis), distinguishing it from other 'calculate_*' tools. However, it lacks specificity on what exactly is computed (e.g., buoyant force magnitude) and does not mention output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as other physics-related calculations. The agent receives no context about typical use cases or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_burn_rate (C)

Compute startup monthly burn rate and runway. Use for fundraising or expense control. Inputs: cash on hand, monthly expenses, monthly revenue. Returns burn, runway in months, profitability date. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description
cash_balance | Yes | Cash in bank EUR
monthly_revenue | No | Monthly revenue EUR
monthly_expenses | Yes | Monthly expenses EUR

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It only states 'burn rate and runway' without specifying whether it returns a single value or both, what unit (e.g., months, currency/month), or if it performs validation. This is insufficient for an agent to understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short (4 words) and sacrifices clarity for brevity: it is terse rather than concise, omitting critical details that would help the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (3 required params, no output schema), the description should at least explain the output (e.g., 'returns burn rate per month and runway in months'). It fails to do so, leaving agents guessing about what the tool actually produces.
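The arithmetic the reviewer wants spelled out is simple. A minimal sketch in Python, with illustrative field names that are assumptions, not the server's actual output shape:

```python
def burn_rate(cash_balance: float, monthly_expenses: float,
              monthly_revenue: float = 0.0) -> dict:
    """Net monthly burn and runway; field names here are illustrative."""
    burn = monthly_expenses - monthly_revenue  # net cash consumed per month
    if burn <= 0:
        # revenue covers expenses: no burn, runway is unbounded
        return {"burn_eur": burn, "runway_months": None, "profitable": True}
    return {"burn_eur": burn,
            "runway_months": cash_balance / burn,
            "profitable": False}

# 120k EUR in the bank, 15k expenses, 5k revenue -> 10k burn, 12 months runway
print(burn_rate(120_000, 15_000, 5_000))
```

Under this reading, "runway in months" is simply cash divided by net burn, exactly the kind of detail the description could state in one clause.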

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3 is appropriate. The description adds minimal meaning beyond the schema by mentioning 'burn rate and runway', which hints at the relationship between parameters but does not explain how they relate or what the output depends on.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Startup burn rate and runway' clearly implies the tool calculates financial metrics for a startup, aligning with the tool name. It distinguishes from many sibling tools focused on other calculations, but could be more explicit about the verb (e.g., 'calculate' is only in the name).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool or when to avoid it. The description does not mention prerequisites, alternatives, or context such as that it requires monthly expenses and cash balance inputs (which are in the schema but not described as usage conditions).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cable_section (C)

Compute electrical cable cross-section (mm²) per NF C 15-100. Use for residential wiring. Inputs: power kW, voltage, length, max voltage drop %. Returns required section. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)
Name | Required | Description
power_w | Yes | Power W
voltage | No | Voltage
length_m | Yes | Cable length m
max_drop_pct | No | Max voltage drop %

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations and a terse description, critical behavioral details are missing. The tool does not disclose assumptions (e.g., material, temperature, single/three-phase), return value format, or any limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, with no unnecessary information. It effectively communicates the core purpose without any waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

As a specialized calculation tool with no output schema, the description should mention what the output represents (e.g., cross-section in mm²) and what standards or assumptions are used. This is missing, making it incomplete for proper use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter, but the description adds no additional meaning beyond those labels. Baseline 3 is appropriate as the schema already documents the properties.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates electrical cable cross-section, which is a specific verb+resource. However, it does not differentiate from the sibling 'calculate_cable_section_electrical', which likely performs a similar function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor are there any prerequisites or typical use cases mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cable_section_electrical (C)

Calculate cable section from power, voltage, distance and max voltage drop. Returns: {current_a, allowed_drop_v, calculated_section_mm2, recommended_mm2}. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)
Name | Required | Description
power_w | Yes | Power in watts
voltage | No | Voltage (default 230V)
distance_m | Yes | One-way cable distance in meters
max_drop_pct | No | Max voltage drop % (default 3)

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states the basic function, omitting details such as assumptions (e.g., copper material), return format, error handling, or whether it returns standard cable sizes. This lack of transparency hinders effective tool invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that front-loads the essential action. It is appropriately concise for a simple tool, though it could benefit from slightly more structure without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should provide more context about expected results (e.g., units, standard cable gauges). It fails to convey the tool's return value or assumptions, leaving the agent underinformed.
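One plausible reading of the documented return fields, assuming single-phase copper wiring and round-trip conductor length (both are assumptions; the tool does not state its material or phase model, and the gauge ladder below is illustrative):

```python
COPPER_RESISTIVITY = 0.01724  # ohm*mm^2/m, assumed conductor material

def cable_section(power_w: float, distance_m: float,
                  voltage: float = 230.0, max_drop_pct: float = 3.0) -> dict:
    """Single-phase sizing sketch: S = 2 * rho * L * I / allowed_drop."""
    current_a = power_w / voltage
    allowed_drop_v = voltage * max_drop_pct / 100
    section_mm2 = 2 * COPPER_RESISTIVITY * distance_m * current_a / allowed_drop_v
    # round up to the nearest standard gauge
    standard = [1.5, 2.5, 4, 6, 10, 16, 25, 35, 50]
    recommended = next((s for s in standard if s >= section_mm2), standard[-1])
    return {"current_a": current_a, "allowed_drop_v": allowed_drop_v,
            "calculated_section_mm2": section_mm2, "recommended_mm2": recommended}

print(cable_section(3680, 25))  # a 16 A load over 25 m at 230 V
```

Stating even this much (material, phase, rounding rule) in the description would resolve the ambiguity the review flags.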

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters, so the description's mention of parameters adds no extra meaning beyond what the schema already provides. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates cable section from the given parameters. However, it does not differentiate itself from the sibling tool 'calculate_cable_section', which may have similar functionality, leaving potential ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor are there any prerequisites or conditions for use. The agent is left to infer usage context from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cac_ltv_ratio (C)

Compute Customer Acquisition Cost vs Lifetime Value ratio. Use for SaaS unit economics analysis (target ≥3.0). Inputs: total CAC, LTV. Returns ratio and health verdict. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description
cac | Yes | Customer acquisition cost EUR
ltv | Yes | Customer lifetime value EUR

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It fails to disclose any behavioral traits such as what the output format is (e.g., decimal, percentage), error handling for invalid inputs, or whether it computes a ratio or difference. The description is too vague.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise, but it is underspecified. It could be improved by adding detail without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (2 numeric inputs, no output schema, no annotations), the description should at least mention the output (e.g., a ratio value). It fails to provide complete context for an agent to understand what the tool returns.
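The computation itself is a one-liner, which underlines the reviewer's point: one clause would suffice to say what comes back. A sketch, assuming the output is a ratio plus a verdict against the conventional >= 3.0 benchmark the description cites:

```python
def cac_ltv_ratio(cac: float, ltv: float) -> dict:
    """LTV divided by CAC, judged against the common >= 3.0 target."""
    ratio = ltv / cac
    return {"ratio": ratio, "healthy": ratio >= 3.0}

print(cac_ltv_ratio(cac=500, ltv=1800))  # ratio 3.6, healthy
```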

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers both parameters with descriptions (CAC and LTV in EUR) giving high coverage (100%). The description adds no additional meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Customer acquisition cost vs lifetime value' clearly indicates the tool relates to comparing CAC and LTV. While it does not explicitly state it calculates a ratio, the context of sibling 'calculate_*' tools implies a calculation. It is specific and not a tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other financial or ratio tools. There is no mention of prerequisites, alternatives, or context where this is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_caffeine_clearance (D)

Compute remaining caffeine in body over time using 5-hour half-life. Use for sleep planning. Inputs: caffeine mg, time since intake. Returns remaining mg and clearance forecast. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description
mg | Yes | Caffeine mg consumed
hours | No | Hours since consumption

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description offers no behavioral information such as idempotency, side effects, or computational complexity. For a calculation tool, the description should clarify expected behavior beyond the bare minimum.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is only three words, which is overly terse. While concise, it sacrifices clarity and fails to provide useful information, making it under-specified rather than efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is insufficient for a tool with no output schema. It does not explain the return value, formula, or how it differs from sibling tools, making it incomplete for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with adequate parameter descriptions in the schema. The description adds no additional meaning or context beyond the schema, achieving the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Caffeine half-life tracker' vaguely suggests tracking half-life but does not specify the tool's output or primary function. It fails to distinguish from siblings like 'calculate_caffeine_half_life' and 'calculate_caffeine_intake', which likely have overlapping purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided. The description does not indicate when to use this tool versus alternative caffeine calculation tools, leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_caffeine_half_life (C)

Calculate remaining caffeine in body after time elapsed. Returns: {hours_to_below_25mg, safe_to_sleep}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description
hours_since | Yes | Hours since consumption
mg_consumed | Yes | Caffeine consumed mg

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits. It fails to mention what the return value represents (e.g., remaining mg, percentage, or time until zero), the assumed half-life constant (typically ~5 hours), or any edge cases (e.g., zero input). This lack of detail may lead to misuse or misunderstanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no unnecessary words. It front-loads the core purpose and omits any fluff, earning the maximum score for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (2 parameters, no output schema), the description is incomplete. It does not specify the output format or units, the half-life assumption, or handling of edge cases (e.g., negative hours). For a calculation tool, users need to know what result to expect, which is absent here.
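The decay both caffeine tools presumably share is standard first-order kinetics. A sketch assuming the ~5-hour adult half-life that the calculate_caffeine_clearance description cites (the 25 mg sleep threshold comes from this tool's documented output):

```python
import math

HALF_LIFE_H = 5.0  # assumed adult caffeine half-life

def caffeine_remaining(mg_consumed: float, hours_since: float) -> float:
    """First-order decay: remaining = dose * 0.5 ** (t / half-life)."""
    return mg_consumed * 0.5 ** (hours_since / HALF_LIFE_H)

def hours_to_below(mg_consumed: float, threshold_mg: float = 25.0) -> float:
    """Solve dose * 0.5 ** (t / HL) = threshold for t."""
    return HALF_LIFE_H * math.log2(mg_consumed / threshold_mg)

print(caffeine_remaining(200, 10))  # two half-lives: 200 mg -> 50 mg
print(hours_to_below(200))          # hours until below the 25 mg threshold
```

Naming the half-life constant and the threshold in the description would remove the guesswork the review objects to.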

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% coverage with clear descriptions for both parameters ('hours_since' and 'mg_consumed'). The description does not add substantial meaning beyond the schema; it uses 'time elapsed' and 'remaining caffeine', which loosely map to the parameters but lack precision. The baseline of 3 is appropriate given the schema's sufficiency.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (calculate) and resource (remaining caffeine in body) with a condition (after time elapsed). It effectively conveys the tool's core function. However, it does not distinguish from sibling tools like 'calculate_caffeine_clearance' or 'calculate_caffeine_intake', leaving room for confusion about which tool handles the half-life decay scenario.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, such as needing to know the half-life value, or when not to use it (e.g., for multi-dose scenarios). The agent is left to infer usage context without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_caffeine_intake (C)

Track caffeine intake against the safe daily limit (400 mg adult). Use for monitoring coffee/tea/soda consumption. Inputs: list of drinks (type, quantity). Returns total mg, % of limit, time-of-day distribution. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description
drinks | Yes |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description must carry behavioral details, but it does not specify the output format (e.g., numeric, comparison result) or what constitutes a 'safe daily limit'. The agent is left uncertain about the tool's exact behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that concisely conveys the core purpose without extraneous words. It is well-structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema, the description should clarify outcomes and constraints. It omits units for quantity, the specific safe limit used, and the nature of the comparison (e.g., boolean or warning). This leaves significant ambiguity.
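A sketch of the aggregation, using the 400 mg adult guideline from the description; the per-serving caffeine values and drink-type keys below are illustrative assumptions, not the server's actual table:

```python
CAFFEINE_MG = {"coffee": 95, "espresso": 63, "tea": 47, "cola": 34}  # assumed values
DAILY_LIMIT_MG = 400  # adult guideline cited in the description

def caffeine_intake(drinks: list) -> dict:
    """Sum caffeine across drinks and express it as a share of the daily limit."""
    total = sum(CAFFEINE_MG[d["type"]] * d["quantity"] for d in drinks)
    return {"total_mg": total, "pct_of_limit": 100 * total / DAILY_LIMIT_MG}

print(caffeine_intake([{"type": "coffee", "quantity": 2},
                       {"type": "tea", "quantity": 1}]))  # 237 mg, 59.25% of limit
```

Documenting the accepted drink types and the unit of 'quantity' would resolve exactly the ambiguity the review raises.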

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description should clarify parameter units and meaning. It does not explain that 'quantity' might be in cups or ml, nor does it elaborate on the drink type enum values. The description adds no value beyond the schema structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates total caffeine intake from beverages and compares it to a safe daily limit. The verb 'calculate' and resource 'caffeine intake' are specific, but it does not differentiate from sibling tools like 'calculate_caffeine_clearance' or 'calculate_caffeine_half_life'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor are there any prerequisites or exclusions mentioned. The description implicitly suggests usage for caffeine assessment, but lacks explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_calories_burned (B)

Estimate calories burned during physical activity using MET values. Returns: {calories_burned}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description
activity | Yes | Type of activity
weight_kg | Yes | Body weight in kilograms
duration_minutes | Yes | Duration in minutes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions 'using MET values' but does not disclose limitations (e.g., activity intensity not accounted for), return format, or required permissions. The behavioral details are insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loaded with purpose, and contains no fluff. It is concise but could be slightly expanded without losing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description should compensate by explaining what the tool returns (e.g., estimated calories as a number). It does not, leaving the agent unsure of the output format.
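The MET formula the description names is well established: kcal ≈ MET × body mass (kg) × duration (h). A sketch with illustrative MET values (the server's activity table and accepted activity strings are not documented, so the keys below are assumptions):

```python
MET_VALUES = {"walking": 3.5, "running": 9.8, "cycling": 7.5}  # illustrative values

def calories_burned(activity: str, weight_kg: float, duration_minutes: float) -> float:
    """kcal ~= MET * body mass in kg * duration in hours."""
    return MET_VALUES[activity] * weight_kg * duration_minutes / 60

print(calories_burned("running", 70, 30))  # roughly 343 kcal
```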

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all three parameters. The description adds minimal value beyond 'using MET values', which is already implied by the tool name. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action ('estimate'), resource ('calories burned'), and methodology ('using MET values'). It distinguishes from numerous sibling tools by specifying the focus on physical activity and MET-based calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like calculate_tdee or calculate_bmr. There is no mention of limitations or context where MET-based estimation is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_canada_combined_tax (A)

Calculate combined Quebec + federal income tax with the Quebec federal abatement (16.5%). Returns: {income_cad, quebec_provincial_tax, federal_gross_tax, federal_abatement_qc, federal_net_tax, total_combined_tax, ...}. See list_bundles for related 'finance-afrique-quebec' calculators.

Parameters (JSON Schema)
Name | Required | Description
income_cad | Yes | Annual income in CAD

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must convey behavioral traits. It states the calculation includes the Quebec federal abatement, but does not disclose output format, assumptions, or limitations. It is minimally adequate for a simple one-parameter tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no superfluous words. It is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description does not explain the return value (e.g., total tax amount, breakdown). It is sufficient for a basic tool but lacks completeness in specifying what the user gets.
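The abatement step itself is mechanical, which is what makes the description's field list useful. A sketch of the combination, using the 16.5% abatement from the description; the bracket computations producing each tax are omitted, and the input amounts below are illustrative, not derived from real brackets:

```python
QC_ABATEMENT = 0.165  # 16.5% federal abatement for Quebec residents

def combined_tax(quebec_provincial_tax: float, federal_gross_tax: float) -> dict:
    """Combine provincial tax with abated federal tax; bracket math omitted."""
    abatement = federal_gross_tax * QC_ABATEMENT
    federal_net = federal_gross_tax - abatement
    return {"federal_abatement_qc": abatement,
            "federal_net_tax": federal_net,
            "total_combined_tax": quebec_provincial_tax + federal_net}

print(combined_tax(quebec_provincial_tax=9000.0, federal_gross_tax=8000.0))
```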

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema already describes income_cad as 'Annual income in CAD'. The description adds the context of combined tax calculation but does not enhance understanding of the parameter itself. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates combined Quebec and federal income tax with the Quebec federal abatement, using a specific verb and resource. It distinguishes itself from sibling tools like calculate_quebec_income_tax and calculate_canada_federal_tax.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool (for combined taxation) but does not explicitly mention alternatives or when not to use it. The context is clear, but explicit guidance would improve it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_canada_ei (B)

Calculate Canadian Employment Insurance (EI) premiums for Quebec and non-Quebec residents. Returns: {gross_annual_cad, max_insurable_earnings, insurable_earnings, employee_rate_pct, employee_premium}. See list_bundles for related 'finance-afrique-quebec' calculators.

Parameters

Name | Required | Description | Default
province | No | Province: QC (Quebec rate) or other (standard rate) | QC
gross_annual_cad | Yes | Gross annual insurable earnings in CAD | -

Behavior: 2/5

No annotations provided, and the description does not disclose behavioral traits beyond the basic calculation. It lacks details like side effects, rate limits, or output specifics (e.g., annual vs monthly premium).

Conciseness: 5/5

Single sentence, front-loaded with action and scope. No unnecessary words.

Completeness: 2/5

Given no output schema and no annotations, the description is too minimal. It does not explain calculation behavior, assumptions, or output format, leaving gaps for a 2-parameter tool.

Parameters: 3/5

Schema description coverage is 100%, so the description adds no additional meaning beyond what the schema already provides. Baseline score of 3 applies.

Purpose: 5/5

The description clearly states it calculates Canadian EI premiums, covering both Quebec and non-Quebec residents. This distinguishes it from sibling tools like calculate_canada_federal_tax.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives (e.g., other Canadian tax calculators). The description only states what it does, not when or when not to use it.

calculate_canada_federal_tax (A)

Calculate Canadian federal income tax (CRA) with basic personal amount deduction. Returns: {income_cad, basic_personal_amount, taxable_income, federal_tax, effective_rate_pct, marginal_rate_pct, ...}. See list_bundles for related 'finance-afrique-quebec' calculators.

Parameters

Name | Required | Description | Default
income_cad | Yes | Annual income in Canadian dollars (CAD) | -

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It mentions the basic personal amount deduction but does not state whether the calculation is read-only, what output to expect (e.g., tax amount, effective rate), or any other behaviors. However, the name and description imply a deterministic calculation without side effects.

Conciseness: 5/5

The description is a single short sentence, front-loaded with the verb and resource. Every word is meaningful and there is no redundancy.

Completeness: 3/5

Given the tool's simplicity (one parameter, no output schema), the description is somewhat complete but lacks details such as the tax year, whether it handles only the basic personal amount, and the format of the return value. A more complete description would include these elements for agent clarity.

Parameters: 3/5

The input schema has 100% description coverage for the single parameter 'income_cad'. The description adds the context of 'basic personal amount deduction' but does not add significant meaning beyond what the schema already provides. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool calculates Canadian federal income tax (CRA) and includes the basic personal amount deduction. It is specific about the resource (federal tax) and distinguishes from sibling tools like calculate_canada_combined_tax (which includes provincial) and calculate_quebec_income_tax (provincial).

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. With many sibling tax calculators, explicit when-to-use or when-not-to-use advice is missing, forcing the agent to infer from the tool name and description.

calculate_canada_rrq (B)

Calculate Quebec Pension Plan (RRQ) contributions for employee. Returns: {gross_annual_cad, rrq_base_earnings, rrq_contribution_tier1, rrq_additional_earnings, rrq_contribution_tier2, total_rrq_contribution, ...}. See list_bundles for related 'finance-afrique-quebec' calculators.

Parameters

Name | Required | Description | Default
gross_annual_cad | Yes | Gross annual earnings in CAD | -

Behavior: 2/5

With no annotations, the description carries the full burden. It fails to disclose behavioral traits: whether the contribution is the employer or employee share, whether return values are annual, or any limits or edge cases.

Conciseness: 4/5

The description is a single concise sentence without wasted words. However, it could be slightly more informative while remaining concise.

Completeness: 2/5

For a calculation tool with no output schema and a single parameter, the description does not explain the return format or any important context about the calculation (e.g., whether it's for 2024 rates, employer/employee split). The description is too minimal for the task's specificity.

Parameters: 3/5

Schema coverage is 100% (single parameter with description 'Gross annual earnings in CAD'). The description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool calculates Quebec Pension Plan (RRQ) contributions for an employee, using a specific verb and resource. Among many similarly named siblings (e.g., calculate_canada_ei, calculate_quebec_income_tax), it distinguishes itself by specifying the 'RRQ' and 'employee' context.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention that RRQ is a mandatory Quebec deduction or that related tools like calculate_canada_combined_tax might be used for total deductions.

calculate_capital_gains_property (B)

Compute French property capital gains tax (plus-value immobilière) including duration abatements. Use for sellers of secondary residence. Inputs: purchase price, sale price, years held, work cost. Returns tax due, social charges, net gain. See list_bundles for related 'finance-france' calculators.

Parameters

Name | Required | Description | Default
sale_price | Yes | Sale price in euros | -
years_held | Yes | Number of years the property was held | -
purchase_price | Yes | Original purchase price in euros | -

Behavior: 2/5

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It only states the calculation purpose without mentioning whether it is read-only, any authentication needs, rate limits, or what the output contains. This is insufficient for a tax calculation tool.

Conciseness: 5/5

The description is a single, concise sentence that immediately conveys the tool's purpose. No redundant or irrelevant information is present.

Completeness: 3/5

For a simple calculator tool with a complete input schema, the description is minimally adequate. However, it lacks context about tax rules (e.g., indexation, exemptions) and does not mention output structure, which could be important for an agent to interpret results correctly.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description does not add any extra meaning beyond the parameter names and types; it simply restates the overall purpose. A higher score would require the description to explain parameter relationships or special values.

Purpose: 4/5

The description clearly states it calculates French property capital gains tax (plus-value immobilière), which is a specific verb-resource pair. However, it does not differentiate from the sibling tool 'calculate_property_capital_gains_fr', which likely serves the same purpose.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, nor any prerequisites or exclusions. It only states what it does, leaving the agent to infer usage context.

calculate_carbon_footprint (B)

Estimate annual personal carbon footprint (tCO₂e). Use for sustainability awareness. Inputs: housing, transport, diet, lifestyle. Returns total emissions and breakdown. See list_bundles for related 'vie-quotidienne' calculators.
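A footprint estimator of this shape can be sketched with the tool's four inputs. The emission factors below are illustrative assumptions for this sketch only, not the server's actual coefficients, and the function name is hypothetical:

```python
# Assumed emission factors in kg CO2e per unit -- illustrative values,
# not the server's actual coefficients.
FACTORS = {"kwh": 0.06, "km_car": 0.19, "km_plane": 0.25, "meat_kg": 27.0}

def carbon_footprint(kwh=0, km_car=0, km_plane=0, meat_kg_week=0):
    """Sum per-category annual emissions and report tonnes of CO2e."""
    breakdown_kg = {
        "electricity": kwh * FACTORS["kwh"],
        "car": km_car * FACTORS["km_car"],
        "plane": km_plane * FACTORS["km_plane"],
        "diet": meat_kg_week * 52 * FACTORS["meat_kg"],  # weekly -> yearly
    }
    return {
        "total_tco2e": round(sum(breakdown_kg.values()) / 1000, 2),
        "breakdown_kg": breakdown_kg,
    }

print(carbon_footprint(kwh=2000, km_car=10000, meat_kg_week=1))
```

Omitted inputs simply contribute zero, which mirrors the schema's optional parameters.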

Parameters

Name | Required | Description | Default
kwh | No | Electricity kWh/year | -
km_car | No | Car km/year | -
km_plane | No | Flight km/year | -
meat_kg_week | No | Meat kg/week | -

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It simply states the tool estimates, but lacks details on methodology, return format, or units. No mention of what happens with zero inputs or default behaviors.

Conciseness: 4/5

The description is extremely concise (one sentence). While it is front-loaded and contains no fluff, it is perhaps too brief given the lack of annotations and output schema. Still, it is not verbose.

Completeness: 2/5

The tool has no output schema, so the description should indicate what the result represents (e.g., tonnes of CO₂e). It does not. Also, with four parameters and no explanation of interaction or order, the description is insufficient for full understanding.

Parameters: 3/5

Schema description coverage is 100%, with each parameter already well-described (e.g., 'Electricity kWh/year'). The description adds no more information, so baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool estimates annual carbon footprint, with a specific verb ('Estimate') and resource ('annual carbon footprint'). It distinguishes well from siblings like 'calculate_carbon_sequestration', which covers a different concept.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives, such as 'calculate_carbon_sequestration' or other environmental calculators. The description does not mention context, prerequisites, or exclusions.

calculate_carbon_sequestration (B)

Estimate CO2 sequestration by trees over their lifetime. Returns: {age_factor, annual_kg_co2_per_tree, annual_kg_co2_total, lifetime_kg_co2, lifetime_tonnes_co2, equivalent_cars_off_road_1yr}. See list_bundles for related 'astronomie-nature' calculators.

Parameters

Name | Required | Description | Default
count | No | Number of trees (default 1) | 1
age_years | Yes | Age of the trees in years | -
tree_type | Yes | Species of tree | -

Behavior: 2/5

With no annotations, the description carries the full burden. It mentions 'over their lifetime' but does not disclose assumptions, data source, output format, or limitations (e.g., whether it considers tree species differences). This is insufficient for a tool estimating carbon sequestration.

Conciseness: 4/5

The description is a single, concise sentence that is front-loaded with the key purpose. However, it omits important guidance, which slightly reduces effectiveness.

Completeness: 2/5

Given the tool has 3 parameters and no output schema, the description should provide more context about the estimation method, units, and usage. It lacks completeness for informed decision-making.

Parameters: 3/5

Schema coverage is 100%, so parameters are already well-documented in the schema. The description adds no additional meaning beyond the schema. Baseline 3 is appropriate.

Purpose: 5/5

The description clearly states the tool estimates CO2 sequestration by trees over their lifetime, which is a specific and unambiguous purpose. It distinguishes itself from sibling tools like calculate_carbon_footprint, which covers the broader carbon footprint, making it easy to select.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives such as calculate_carbon_footprint. It lacks context on prerequisites or scenarios where this tool is appropriate.

calculate_card_draw_probability (A)

Calculate hypergeometric probability of drawing specific cards from a deck. Returns: {odds_one_in}. See list_bundles for related 'jeux-probabilites' calculators.
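The hypergeometric distribution named in the description can be sketched directly with the tool's four parameters. This sketch assumes 'target cards wanted' means at least that many successes; the server may instead compute exactly that many, and the function name is hypothetical:

```python
from math import comb

def card_draw_probability(deck_size, draw_count, target_cards,
                          cards_in_deck_matching):
    """Hypergeometric sketch: probability of drawing at least
    `target_cards` successes in `draw_count` cards drawn without
    replacement from `deck_size` cards, of which
    `cards_in_deck_matching` are successes."""
    N, n, K = deck_size, draw_count, cards_in_deck_matching
    p = sum(
        comb(K, k) * comb(N - K, n - k)      # ways to pick k hits, n-k misses
        for k in range(target_cards, min(n, K) + 1)
    ) / comb(N, n)                           # over all possible hands
    return {"probability": p,
            "odds_one_in": 1 / p if p else float("inf")}

# e.g. at least one ace in a 5-card hand from a standard 52-card deck
print(card_draw_probability(52, 5, 1, 4))
```

For the at-least-one case this agrees with the complement 1 - C(48,5)/C(52,5), about 0.341 (odds of roughly one in 2.9).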

Parameters

Name | Required | Description | Default
deck_size | No | Total number of cards in the deck (default 52) | 52
draw_count | Yes | Number of cards drawn | -
target_cards | Yes | Number of target cards wanted in the draw | -
cards_in_deck_matching | Yes | Number of target cards in the deck | -

Behavior: 3/5

No annotations provided. The description mentions 'hypergeometric probability', which implies sampling without replacement, but does not disclose additional behavioral traits like assumptions, edge cases, or limitations. It provides basic transparency but is not comprehensive.

Conciseness: 5/5

Single sentence, no fluff, front-loaded with action. Every word earns its place.

Completeness: 3/5

No output schema. The description does not mention the return value (e.g., a probability from 0 to 1). While input parameters are well covered, the output is left unspecified. Adequate but incomplete for a probability tool.

Parameters: 3/5

Schema coverage is 100% with all parameters described. The description adds no additional information beyond the schema. A baseline score of 3 is appropriate, as the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the verb 'calculate' and the specific resource 'hypergeometric probability of drawing specific cards from a deck'. It distinguishes itself from siblings like 'calculate_dice_probability' and 'calculate_probability_binomial' by specifying 'card draw' and 'hypergeometric'.

Usage Guidelines: 3/5

No explicit guidance on when to use this tool versus alternatives. Usage is implied by the tool's name and description, but there are no when-not conditions or references to sibling tools.

calculate_car_depreciation (A)

Calculate car residual value: Y1:-25%, Y2:-15%, Y3:-10%, Y4-5:-8%, Y6+:-5%. Returns: {residual_value, total_dep, pct_lost}. See list_bundles for related 'auto-transport' calculators.
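The stated schedule can be sketched as follows. The rates are taken from the description; the compounding (each year's percentage applied to the previous year's value) is an assumption the description does not state explicitly, and the function name is hypothetical:

```python
def car_residual_value(purchase_price, age_years):
    """Sketch of the stated schedule: Y1 -25%, Y2 -15%, Y3 -10%,
    Y4-5 -8%, Y6+ -5%, assumed to compound year over year."""
    def rate(year):
        if year == 1:
            return 0.25
        if year == 2:
            return 0.15
        if year == 3:
            return 0.10
        if year in (4, 5):
            return 0.08
        return 0.05  # year 6 and beyond

    value = purchase_price
    for year in range(1, int(age_years) + 1):
        value *= 1 - rate(year)
    total_dep = purchase_price - value
    return {"residual_value": round(value, 2),
            "total_dep": round(total_dep, 2),
            "pct_lost": round(100 * total_dep / purchase_price, 1)}

# A 30,000 EUR car after 3 years: 30000 * 0.75 * 0.85 * 0.90
print(car_residual_value(30000, 3))
```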

Parameters

Name | Required | Description | Default
age_years | Yes | Age in years | -
purchase_price | Yes | Original price | -

Behavior: 4/5

The description exposes the exact depreciation percentages per year, which is the core behavior. Although no annotations exist, this level of detail allows an agent to understand the calculation logic. It does not disclose the return format or potential error conditions, but for a simple calculator this is adequate.

Conciseness: 5/5

The description is a single sentence that efficiently conveys all necessary information: the purpose and the precise depreciation rates. No extraneous words.

Completeness: 4/5

Given the simplicity of the tool (2 parameters, no output schema, no annotations), the description covers the calculation logic fully. It could be improved by specifying the return value (e.g., 'returns the residual value in the same currency'), but the current content is mostly complete for practical use.

Parameters: 5/5

The input schema already provides descriptions for both parameters (age_years, purchase_price). The description adds significant value by explaining exactly how these parameters are combined (depreciation schedule), giving the agent full understanding of the formula.

Purpose: 5/5

The description clearly states the tool calculates car residual value, defining specific annual depreciation rates. This verb+resource combination is distinct from sibling tools, many of which are unrelated calculation functions.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives (e.g., calculate_car_lease_vs_buy). The description only states what it does, without usage context or prerequisites.

calculate_car_lease_vs_buy (C)

Compare leasing vs buying a car over the same period. Use for automotive purchase decisions. Inputs: car price, lease monthly cost, loan rate, ownership years. Returns total costs and recommendation. See list_bundles for related 'finance-universal' calculators.
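A comparison of this shape can be sketched with the tool's six parameters. This uses the standard amortized-loan payment formula and assumes buying's net cost subtracts the residual value still owned at the end; the server's actual comparison and recommendation logic may differ, and the function name is hypothetical:

```python
def lease_vs_buy(car_price, loan_rate, loan_months, lease_months,
                 lease_monthly, residual_value):
    """Sketch: amortized loan cost (net of the car's remaining value)
    versus total lease payments over the lease term."""
    r = loan_rate / 100 / 12  # monthly interest rate
    if r > 0:
        # standard annuity payment formula
        payment = car_price * r / (1 - (1 + r) ** -loan_months)
    else:
        payment = car_price / loan_months
    buy_net_cost = payment * loan_months - residual_value
    lease_total = lease_monthly * lease_months
    return {
        "monthly_loan_payment": round(payment, 2),
        "buy_net_cost": round(buy_net_cost, 2),
        "lease_total_cost": round(lease_total, 2),
        "recommendation": "lease" if lease_total < buy_net_cost else "buy",
    }

# 30,000 EUR car, 5% loan over 48 months vs a 400 EUR/month lease
print(lease_vs_buy(30000, 5.0, 48, 48, 400, 15000))
```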

Parameters (JSON Schema)
- car_price (required): Car purchase price EUR
- loan_rate (required): Loan annual rate percent
- loan_months (required): Loan duration months
- lease_months (required): Lease duration months
- lease_monthly (required): Monthly lease payment EUR
- residual_value (required): Car residual value at lease end EUR

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the burden of behavioral disclosure. It only states the comparison purpose but does not mention that it computes totals, assumptions, or output format (no output schema). Lacks transparency on mutation, safety, or return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. It could be slightly expanded for clarity without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 6 numeric parameters but no output schema. The description provides minimal context; it does not explain what the comparison output is (e.g., cost breakdown, recommendation). Adequate for a simple calculator but lacking for full understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so each parameter already has a description. The tool description adds no extra semantic meaning beyond the schema, meeting baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares car leasing versus buying with a loan, specifying the verb 'Compare' and the resource. It is distinct from siblings like 'calculate_car_depreciation', though it does not explicitly differentiate itself from them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., other car cost calculators). No when/when-not or sibling references are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_carpet_flooring (grade B)

Compute flooring cost including waste margin. Use for renovation budget. Inputs: surface m², product price/m², waste %. Returns total cost and m² to order. See list_bundles for related 'vie-quotidienne' calculators.
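The arithmetic implied by the description is simple area-plus-waste pricing. The sketch below assumes the waste margin is applied to the raw area before pricing; the default values are placeholders, since the schema marks price_m2 and waste_pct optional without stating the tool's own defaults.

```python
def flooring_order(width_m, length_m, price_m2=20.0, waste_pct=10.0):
    """Surface to order = raw area plus the waste margin; total
    cost = that surface times the price per m². Defaults here are
    illustrative, not the tool's own."""
    area_to_order = width_m * length_m * (1 + waste_pct / 100)
    return {
        "m2_to_order": round(area_to_order, 2),
        "total_cost": round(area_to_order * price_m2, 2),
    }
```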

Parameters (JSON Schema)
- width_m (required): Width m
- length_m (required): Length m
- price_m2 (optional): EUR/m²
- waste_pct (optional): Waste %

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations present. The description does not disclose any behavioral traits like what happens with inputs, error conditions, or output format. It only states the purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, very concise and front-loaded. It wastes no words, though it lacks some detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, and the description does not mention what the tool returns (e.g., total cost, area, waste amount). For a simple calculator with 4 parameters, more context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond what the schema already provides for each parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Calculate flooring cost with waste'. It uses a specific verb and resource, distinguishing it from many other 'calculate_' sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool over alternatives (e.g., other flooring calculators like calculate_tile_quantity). No exclusions or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cat_age (grade C)

Convert cat age to human-equivalent years (15+9+4×). Use for feline health. Inputs: cat age years. Returns human-equivalent age and life stage. See list_bundles for related 'animaux' calculators.
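The 15+9+4× rule quoted in the description is a common piecewise convention: the first cat year counts as 15 human years, the second adds 9, and each year after that adds 4. A minimal sketch, assuming linear interpolation within the first two years:

```python
def cat_age_to_human(cat_years: float) -> float:
    """15 + 9 + 4x rule from the tool description: year one is
    worth 15 human years, year two adds 9, each later year adds 4.
    Interpolation for fractional ages is an assumption."""
    if cat_years <= 0:
        raise ValueError("cat_years must be positive")
    if cat_years <= 1:
        return 15 * cat_years
    if cat_years <= 2:
        return 15 + 9 * (cat_years - 1)
    return 24 + 4 * (cat_years - 2)
```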

Parameters (JSON Schema)
- cat_years (required): Cat age in years

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description does not disclose behavioral traits such as the conversion formula, return type, or handling of edge cases (e.g., fractional years). The agent has little insight into the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (5 words) but lacks structure and completeness. While brevity is positive, the description is too sparse to be fully effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and simple parameters, the description should mention the return value or conversion methodology. It does not specify if a linear (e.g., 1 cat year = 7 human years) or nonlinear formula is used, leaving the agent uncertain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with cat_years described as 'Cat age in years'. The tool description adds 'human years' but does not elaborate on the parameter beyond the schema. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Cat age in human years' clearly states the tool's purpose: converting a cat's age to its human equivalent. It is distinct from sibling tools like calculate_dog_age, though it lacks an explicit verb like 'calculates'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as calculate_dog_age or calculate_pet_age. The description does not mention exclusions or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cat_food (grade C)

Calculate daily cat food quantity based on weight, age and lifestyle. Returns: {kcal_per_day}. See list_bundles for related 'animaux' calculators.
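The tool's exact formula is undocumented. A common veterinary estimate starts from resting energy requirement (RER ≈ 70 × weight^0.75) scaled by a lifestyle factor; the factors below are illustrative assumptions, not values taken from the tool.

```python
def cat_kcal_per_day(weight_kg: float, indoor: bool = True) -> int:
    """Resting energy requirement (RER = 70 * kg^0.75) scaled by an
    assumed lifestyle factor; 1.2 (indoor) and 1.4 (outdoor) are
    illustrative, not confirmed tool values."""
    rer = 70 * weight_kg ** 0.75
    factor = 1.2 if indoor else 1.4
    return round(rer * factor)
```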

Parameters (JSON Schema)
- age (required)
- indoor (optional)
- weight_kg (required)

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should have disclosed behavioral traits like read-only nature, safety, or required permissions. It does not, leaving the agent uninformed about side effects or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, free of fluff and front-loaded. However, it could be slightly expanded without losing conciseness to cover parameter meanings.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and 3 parameters, the description is insufficient. It does not mention return values, units, formula basis, or limitations, leaving the agent with incomplete information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, meaning the description adds no clarification for parameters. 'Lifestyle' vaguely maps to 'indoor' but is not explained. The description fails to compensate for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Calculate daily cat food quantity based on weight, age and lifestyle', specifying the verb, resource, and key input dimensions. It explicitly distinguishes from dog food tools by the word 'cat'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., calculate_dog_food). The description lacks context on prerequisites or exclusions, which is critical given many sibling calculation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cat_pregnancy (grade B)

Compute cat due date from mating date (gestation 63-67 days). Use for breeders. Inputs: mating date. Returns due date window and milestones. See list_bundles for related 'animaux' calculators.
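The 63-67 day gestation range in the description translates directly into a due-date window. A minimal sketch of that date arithmetic:

```python
from datetime import date, timedelta

def cat_due_window(mating_date):
    """Earliest and latest due dates from the 63-67 day gestation
    range stated in the description (YYYY-MM-DD in, YYYY-MM-DD out)."""
    start = date.fromisoformat(mating_date)
    return (
        (start + timedelta(days=63)).isoformat(),
        (start + timedelta(days=67)).isoformat(),
    )
```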

Parameters (JSON Schema)
- mating_date (required): Mating date YYYY-MM-DD

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description bears full burden for behavioral disclosure. It omits details such as the assumed gestation period, whether inputs are validated, or that the output is an estimated due date. The tool's internal logic (e.g., average 63-65 days) is not stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, focused sentence with no redundant information. It is efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is adequate but lacks details on the return value format or any assumptions. It could be more complete by noting the output format (e.g., 'Returns due date as YYYY-MM-DD').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description for the parameter is provided. However, the overall description adds no extra meaning beyond the schema; it merely states the purpose. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates cat due date from mating date, specifying the verb and resource. It distinguishes itself from sibling tools like calculate_dog_pregnancy or calculate_pregnancy_due_date by explicitly mentioning 'cat'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, when-not-to-use, or comparisons to other breeding-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cheque_repas (grade C)

Calculate Belgian meal voucher (cheque-repas / maaltijdcheque) benefit. Returns: {face_value_per_voucher, employer_contribution_per_voucher, employee_contribution_per_voucher, monthly_total_vouchers, monthly_employer_cost, monthly_employee_contribution, ...}. See list_bundles for related 'finance-belgique' calculators.
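From the parameter bounds shown in the schema (employer contribution max 6.91 EUR, employee contribution min 1.09 EUR per voucher), the monthly figures fall out by simple multiplication. A sketch assuming 20 working days per month when none are given; the tool's own default is not documented:

```python
def meal_voucher_totals(days_per_month=20,
                        employer_contribution=6.91,
                        employee_contribution=1.09):
    """Face value = employer + employee share per voucher; monthly
    totals scale by working days. Defaults mirror the schema's
    stated bounds, not confirmed tool defaults."""
    face = employer_contribution + employee_contribution
    return {
        "face_value_per_voucher": round(face, 2),
        "monthly_employer_cost": round(employer_contribution * days_per_month, 2),
        "monthly_employee_contribution": round(employee_contribution * days_per_month, 2),
    }
```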

Parameters (JSON Schema)
- days_per_month (optional): Working days per month
- employee_contribution (optional): Employee contribution per voucher (min 1.09 EUR)
- employer_contribution (optional): Employer contribution per voucher (max 6.91 EUR)

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits beyond the basic purpose. For a calculation tool, it is likely read-only and non-destructive, but this is not stated. The description fails to add value beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence with no redundant words. It is front-loaded with the key action. However, it could be slightly more informative without adding length (e.g., mentioning output).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should explain what the tool returns (e.g., total voucher value, employer/employee contributions). It does not, leaving the agent to infer. The large set of sibling tools is not leveraged for differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all three parameters, so the baseline is 3. The tool description does not add any additional meaning beyond the schema, but the schema itself is adequate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Belgian meal voucher benefit, using a specific verb and resource. It distinguishes from siblings by mentioning the specific country and type of benefit, but could be more precise about what 'benefit' means (e.g., total value, monthly amount).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other Belgian calculation tools (e.g., Belgian salary, income tax). No context about prerequisites or exclusion criteria is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_child_support (grade A)

Estimate French child support (pension alimentaire) based on income, custody and number of children. Returns: {income, rate_pct, monthly_support, annual_support}. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
- income (required): Net monthly income of the paying parent in euros
- custody (optional, default: full): Custody type: full (garde principale), alternating (alternee), reduced (visite et hebergement)
- children_count (required): Number of children (1-6)

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It explains the tool estimates based on income, custody, and children count, which gives some behavioral insight. However, it does not disclose the nature of the calculation (e.g., exact formula, assumptions, or output format).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence with no redundant words. It is front-loaded with the tool's core purpose and immediately conveys the key inputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately covers the inputs and purpose, but lacks any indication of the output format or units (e.g., monthly amount in euros). Given no output schema, the description could be more complete to set user expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, with detailed parameter descriptions. The description merely lists the same parameters without adding new meaning or context beyond what the schema already provides. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates French child support (pension alimentaire), which is a specific and distinct function among many sibling calculation tools. It names key inputs (income, custody, number of children) and uses a precise verb-resource combination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives, nor does it mention prerequisites or when not to use it. It merely states what it does without context about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_chinese_zodiac (grade B)

Determine Chinese zodiac animal and element from birth year. Returns: {full_sign}. See list_bundles for related 'fun' calculators.
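The underlying lookup is the standard sexagenary cycle: animals repeat every 12 years and elements every 10 (changing every two years), anchored so that 1984 is a Wood Rat year. Whether the tool uses exactly this lookup is an assumption; a sketch:

```python
ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]
ELEMENTS = ["Wood", "Fire", "Earth", "Metal", "Water"]

def chinese_zodiac(birth_year: int) -> str:
    """Sexagenary-cycle lookup: animal index modulo 12, element
    changing every two years modulo 10, anchored at year 4 CE
    (a Wood Rat year)."""
    animal = ANIMALS[(birth_year - 4) % 12]
    element = ELEMENTS[((birth_year - 4) % 10) // 2]
    return f"{element} {animal}"
```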

Parameters (JSON Schema)
- birth_year (required): Birth year

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. It only states purpose and input, with no disclosure of behavioral traits (e.g., return format, side effects, auth needs).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with purpose, no wasted words. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, with one parameter but no output schema. The description implies it returns both the animal and the element but lacks format details. Adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds 'from birth year', which mirrors the schema. It contributes no additional meaning beyond the schema, earning the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool determines Chinese zodiac animal and element from a birth year. It uses a specific verb and resource, and is distinct from sibling tools (no other zodiac tool).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Sibling tools are numerous but unrelated; however, no explicit usage context or when-not-to-use is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_churn_rate (grade D)

Compute customer or revenue churn rate over a period. Use for SaaS retention analysis. Inputs: starting customers, churned, period length. Returns churn %, retention %, and annualized rate. See list_bundles for related 'finance-universal' calculators.
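The description names the three outputs but not the method. A sketch, assuming simple linear annualization (scaling the period rate by 12/period_months); a compounding tool would instead use 1 - (1 - churn)^(12/period_months), so treat the last figure as an assumption.

```python
def churn_metrics(start_customers, lost_customers, period_months=1):
    """Period churn = lost / start; retention is its complement; the
    annualized figure scales linearly by 12/period_months (an
    assumption -- the tool might compound instead)."""
    churn = lost_customers / start_customers
    return {
        "churn_pct": round(churn * 100, 2),
        "retention_pct": round((1 - churn) * 100, 2),
        "annualized_churn_pct": round(churn * (12 / period_months) * 100, 2),
    }
```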

Parameters (JSON Schema)
- period_months (optional): Period in months
- lost_customers (required): Customers lost
- start_customers (required): Customers at period start

Output Schema (JSON Schema)
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It does not state that the tool is read-only, what it returns, or how the rate is computed (e.g., that annualized churn might be lost_customers / start_customers × (12 / period_months)). No behavioral traits are revealed beyond the name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a short phrase, but it is under-specified rather than concise. It omits essential information that would fit in a single sentence, such as the formula or output format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 parameters, no output schema, no annotations), the description still fails to provide a complete understanding. An agent would not know what the tool returns, the calculation formula, or any edge cases, making it inadequate for reliable invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional meaning beyond what the schema provides, but meets the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Customer churn rate' is a tautology, merely restating the tool name without providing a verb phrase indicating action. It does not specify that the tool calculates or outputs a rate, nor does it differentiate from sibling tools like calculate_burn_rate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives (e.g., when to calculate churn vs. other metrics). There are no prerequisites, context hints, or exclusion conditions provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
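
The churn formula the review finds missing is a one-liner: customers lost divided by customers at the start of the period. A sketch (the function name and period convention are illustrative, not the server's):

```python
def churn_rate(customers_at_start, customers_lost):
    """Percentage of customers lost over a period."""
    return customers_lost / customers_at_start * 100

churn_rate(200, 10)  # 5.0: losing 10 of 200 customers is a 5% churn rate
```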

calculate_clothing_size_convert (B)

Convert clothing size between EU, US and UK systems. Returns: {original}. See list_bundles for related 'conversions' calculators.

Parameters

Name | Required | Description
sex | Yes | Sex
size | Yes | Size number in source system
garment | Yes | Type of garment
from_system | Yes | Source system

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
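
As a sketch of what such a converter does internally, women's dress sizes in the three systems are often related by fixed offsets. The offsets below are a commonly cited approximation, not the server's actual table, and real brand charts vary:

```python
# Commonly cited offsets for women's dress sizes; real brand charts vary,
# and this is NOT the server's actual conversion table.
OFFSETS = {"EU": 0, "US": -32, "UK": -28}

def convert_size(size, from_system, to_system):
    eu_size = size - OFFSETS[from_system]   # normalise to EU
    return eu_size + OFFSETS[to_system]     # re-offset to the target system

convert_size(38, "EU", "US")  # 6
```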
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description does not disclose how edge cases (e.g., missing conversions, half sizes) are handled, nor does it mention result format or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with key information. Efficient but could be slightly expanded without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, no annotations. The description is too minimal for a conversion tool with multiple systems, garment types, and sex. Lacks details on fallback behavior or supported size ranges.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%; each parameter has a description. The tool description adds no additional semantic value beyond what is already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (convert), resource (clothing size), and target systems (EU, US, UK). It effectively distinguishes from sibling converters like shoe or bra size.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other size converters. Does not specify exclusions or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_compost_volume (C)

Compute compost volume produced from kitchen and garden waste over a year. Use for compost bin sizing. Inputs: household size, garden size m². Returns L/year and recommended bin volume. See list_bundles for related 'jardinage' calculators.

Parameters

Name | Required | Description
depth_cm | No | Compost layer depth in centimeters (default 5cm)
surface_m2 | Yes | Surface area in square meters

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
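
The schema suggests the calculation is surface area times layer depth, converted to litres. That reading is an assumption (the tool documents neither formula nor units), but it can be sketched as:

```python
def compost_volume_l(surface_m2, depth_cm=5.0):
    """Litres of compost to cover a surface at the given layer depth.

    depth_cm / 100 converts centimetres to metres; 1 m³ = 1000 L.
    """
    return surface_m2 * (depth_cm / 100) * 1000

compost_volume_l(10)  # 500.0 L for a 5 cm layer over 10 m²
```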
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not disclose any behavioral traits beyond the basic calculation. It does not mention that it is a read-only operation, nor does it specify output units or any constraints. With no annotations, the description carries full burden but adds minimal behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no unnecessary words. However, it lacks structure such as specifying output format or units, which would be useful for an agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the schema covers parameters fully, the description is mostly adequate for a simple calculator. However, it does not specify the output units or format (e.g., cubic meters, kilograms), which could lead to ambiguity. Without an output schema, this is a gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides full descriptions for both parameters (surface_m2 and depth_cm), achieving 100% coverage. The description does not add any additional meaning beyond what is already in the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates volume and weight of compost for a garden surface, which is specific and distinct from many sibling calculators. However, it does not explicitly differentiate from similar tools like calculate_garden_soil, leaving ambiguity when multiple garden-related calculators exist.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., calculate_garden_soil for soil). The description only states what it does, not when or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_compound_interest (B)

Compute compound interest growth A=P(1+r/n)^(nt). Use for savings, retirement projections, investment forecasting. Returns final amount, total interest, and yearly breakdown. See list_bundles for related 'finance-universal' calculators.

Parameters

Name | Required | Description
years | Yes | Investment duration in years
principal | Yes | Initial amount
annual_rate | Yes | Annual interest rate in %
compounds_per_year | No | Compounding frequency per year

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
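
The formula quoted in the description, A = P(1+r/n)^(nt), maps directly to code. A minimal sketch (the tool's rounding behaviour and yearly breakdown are not specified, so they are omitted):

```python
def compound_interest(principal, annual_rate, years, compounds_per_year=1):
    """Future value A = P * (1 + r/n)**(n*t), with annual_rate in percent."""
    r = annual_rate / 100
    return principal * (1 + r / compounds_per_year) ** (compounds_per_year * years)

round(compound_interest(1000, 5, 10), 2)  # 1628.89
```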
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the underlying formula, which gives some transparency about the calculation. However, it does not mention any behavioral traits such as rounding, precision, or handling of edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with the formula, no unnecessary words. Extremely concise and directly to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator with no output schema, the description provides essential context (the formula). It could mention the return value (future value) but is largely sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all parameters. The description adds the formula but does not significantly enhance understanding beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates compound interest and provides the formula. However, it does not differentiate from similar tools like calculate_simple_interest or calculate_compound_interest_monthly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It does not specify prerequisites or situations where this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_compound_interest_monthly (C)

Compute compound interest with monthly contributions (savings plan). Use for systematic savers. Inputs: initial amount, monthly contribution, annual rate %, years. Returns final value, total contributed, total interest. See list_bundles for related 'finance-universal' calculators.

Parameters

Name | Required | Description
years | Yes | Number of years
principal | Yes | Initial capital EUR
annual_rate | Yes | Annual interest rate percent
monthly_contribution | Yes | Monthly contribution EUR

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
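
The standard future-value-of-an-annuity formula covers this case, including the zero-rate edge case the review flags. The sketch below assumes end-of-month contributions and monthly compounding, neither of which the description confirms:

```python
def savings_plan_fv(principal, annual_rate, years, monthly_contribution):
    """Future value with end-of-month contributions and monthly compounding.

    FV = P*(1+i)**n + C*(((1+i)**n - 1) / i), where i is the monthly rate.
    """
    i = annual_rate / 100 / 12
    n = years * 12
    if i == 0:                       # zero-rate edge case: no growth term
        return principal + monthly_contribution * n
    growth = (1 + i) ** n
    return principal * growth + monthly_contribution * (growth - 1) / i

savings_plan_fv(1000, 0, 1, 100)  # 2200: no interest, just 1000 + 12 x 100
```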
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. It does not state whether the compounding frequency is monthly, what the output format is, or any assumptions or limitations (e.g., rate format). This lack of detail reduces transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, which is efficient. However, it could be slightly more informative without sacrificing conciseness, such as specifying the compounding frequency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the description is too minimal. It does not explain the calculation formula, the return value, or handle edge cases (e.g., zero rate). This leaves significant gaps for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with each parameter described (e.g., 'Initial capital EUR', 'Annual interest rate percent'). The description adds no additional meaning beyond summarizing the function, so the default score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates final amount with monthly contributions and compound interest. It specifies the verb 'calculate' and the resource 'final amount', making the purpose understandable. However, it does not differentiate from the sibling tool 'calculate_compound_interest' which may lack monthly contributions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_compound_interest' or other financial calculators. There is no mention of context, prerequisites, or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_concrete_mix (C)

Compute cement, sand, gravel, and water for a given concrete volume (NF DTU 21). Use for construction projects. Inputs: volume m³, mix ratio. Returns weights of each ingredient. See list_bundles for related 'construction' calculators.

Parameters

Name | Required | Description
volume_m3 | Yes | Volume in m³

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
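
Scaling a mix is a simple proportional multiplication once per-m³ dosages are fixed. The quantities below are rough rules of thumb for a commonly quoted 350 kg/m³ dosage, not the NF DTU 21 figures the tool cites, and the dict name is invented for the sketch:

```python
# Rough per-m³ quantities for a commonly quoted 350 kg/m³ concrete dosage.
# Illustrative rules of thumb only, NOT the NF DTU 21 values the tool uses.
PER_M3 = {"cement_kg": 350, "sand_kg": 800, "gravel_kg": 1050, "water_l": 175}

def concrete_mix(volume_m3):
    """Scale the per-m³ dosages to the requested concrete volume."""
    return {name: qty * volume_m3 for name, qty in PER_M3.items()}

concrete_mix(2.0)["cement_kg"]  # 700.0
```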
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility for behavioral disclosure. It only states the tool calculates ingredients but does not describe what is returned (e.g., list of materials, ratios), side effects (none presumed), or any constraints. Significant gaps remain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence without waste. It is front-loaded with the core action. However, it could include more structured details without harming brevity, hence a 4 rather than 5.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter, no output schema, and no annotations, the description is minimally adequate but does not fully specify output format or behavior (e.g., assumed standard mix ratio). A score of 3 reflects that it works but leaves ambiguity for complex use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single parameter 'volume_m3' with a clear meaning 'Volume in m³'. The description adds no additional information beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Calculate concrete ingredients for a given volume' clearly states the verb (calculate), resource (concrete ingredients), and condition (given volume). It distinguishes from sibling tools like 'calculate_concrete_stairs' or other calculators, though not explicitly. A score of 4 reflects good clarity without redundant differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives, such as other concrete-related calculators. It does not mention prerequisites, exclusions, or context. This leaves the agent without decision support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_concrete_stairs (B)

Calculate concrete stair dimensions, volume and materials using Blondel's formula. See list_bundles for related 'construction' calculators.

Parameters

Name | Required | Description
width_m | No | Stair width in meters (default 0.9m)
height_m | Yes | Total stair height to climb in meters
num_steps | Yes | Number of steps
thickness_cm | No | Slab thickness under each tread in cm (default 15cm)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
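
Blondel's rule relates riser height h and going g by 2h + g of roughly 63 cm, with a comfort band of about 60 to 64 cm. The sketch below covers only the dimensional part; the volume and materials outputs the description mentions are not reproduced here:

```python
def stair_dimensions(height_m, num_steps):
    """Riser height and the going suggested by Blondel's rule (2h + g = 63 cm)."""
    riser_cm = height_m * 100 / num_steps
    going_cm = 63 - 2 * riser_cm          # comfort band is roughly 60-64 cm
    return riser_cm, going_cm

stair_dimensions(2.7, 15)  # (18.0, 27.0): 18 cm risers with 27 cm treads
```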
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It mentions using Blondel's formula but does not explain the output format, constraints on inputs (e.g., maximum steps), or any side effects. The description lacks details about what the tool returns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is one sentence that efficiently conveys the tool's purpose and method. It is front-loaded with the key action 'Calculate'. However, it could be slightly more structured by separating the output types.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no output schema), the description fails to specify what outputs (dimensions, volume, materials) are provided or how they are calculated. Missing details about return values and constraints beyond schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so each parameter already has a description. The tool description adds no additional meaning beyond referencing Blondel's formula, which does not elaborate on parameter relationships or usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates concrete stair dimensions, volume, and materials using Blondel's formula. It specifies the resource (concrete stairs) and methodology, distinguishing it from a generic staircase calculator like calculate_staircase.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for concrete stair calculations but gives no explicit guidance on when to prefer this tool over alternatives (e.g., calculate_staircase) or when not to use it. No alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_condominium_charges (C)

Compute one owner's share of condominium charges from the budget and tantièmes. Use for syndic or owner verification. Inputs: total budget, tantièmes-owned, total tantièmes. Returns annual and monthly share. See list_bundles for related 'immobilier' calculators.

Parameters

Name | Required | Description
total_charges | Yes | Total annual condominium charges EUR
ownership_share_pct | Yes | Ownership share (tantièmes) percent

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
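
The computation implied by the two schema parameters is a proportional split of the annual budget. A minimal sketch (the monthly figure assumes a plain division by 12, which the tool does not confirm):

```python
def condo_share(total_charges, ownership_share_pct):
    """Owner's annual and monthly share of the condominium budget, in EUR."""
    annual = total_charges * ownership_share_pct / 100
    return annual, annual / 12

condo_share(12000, 10)  # (1200.0, 100.0)
```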
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must itself disclose behavioral traits. It only states 'calculate individual share' without confirming it is a pure computation (no side effects), what it returns, or how it handles edge cases (e.g., division by zero). The agent must assume it is a simple multiplication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, direct sentence with no fluff. It is appropriately sized for such a simple tool, though expanding slightly could improve clarity without harming conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description omits essential output context—there is no output schema, and it does not state that the result is a numeric share in EUR (the same unit as 'total_charges'). For a calculation tool, return type and units are important for safe usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage (both parameters described). The description adds no new information beyond the schema's parameter descriptions, so it meets the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's verb ('Calculate') and resource ('individual share of condominium charges'), making its purpose evident. It differentiates from siblings like 'calculate_belgian_vat' or 'calculate_bmi' by specifying the domain. However, it could be improved by explicitly mentioning the formula (e.g., 'based on total charges and ownership percentage').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The sibling tools list is extensive, but the description does not include any when/not/alternative recommendations, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cone (B)

Compute cone volume V=(1/3)πr²h and lateral/total surface area. Use for geometry or container design. Inputs: radius, height. Returns volume and areas. See list_bundles for related 'math' calculators.

Parameters

Name | Required | Description
height | Yes | Height
radius | Yes | Base radius

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
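
The quoted formulas are standard geometry: V = (1/3)πr²h, lateral area πrl with slant height l = √(r² + h²), and total area adding the base circle. A sketch (function and variable names are illustrative):

```python
import math

def cone(radius, height):
    """Volume, lateral area, and total area of a right circular cone."""
    slant = math.hypot(radius, height)            # l = sqrt(r**2 + h**2)
    volume = math.pi * radius ** 2 * height / 3   # V = (1/3) * pi * r**2 * h
    lateral = math.pi * radius * slant            # A_lat = pi * r * l
    return volume, lateral, lateral + math.pi * radius ** 2

v, lat, total = cone(3, 4)  # slant height 5, so lateral area is 15*pi
```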
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden for behavioral disclosure. It does not mention that the tool is read-only, what it returns, or any side effects. For a calculator, the behavior is partially implied but not explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at only five words, with no superfluous content. While efficient, it could benefit from a full sentence format for improved readability. Still, it earns a 4 for lack of waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple geometry calculator with two numeric parameters and no output schema, the description is minimally adequate. It lacks information about return format, units, or edge cases, but given the tool's simplicity, it just meets the threshold.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides descriptions for both parameters ('Height' and 'Base radius') with 100% coverage. The description adds no additional meaning beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Cone volume and surface area' clearly states the tool's purpose: it computes volume and surface area of a cone. It is specific about the geometric shape and the outputs, distinguishing it from sibling tools like calculate_cylinder or calculate_sphere.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusions. The agent is left to infer usage from the tool name and siblings, which is insufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_confidence_interval (C)

Compute confidence interval for a sample mean. Use for statistics, A/B test results, or polling. Inputs: mean, std dev, sample size, confidence (90/95/99%). Returns CI lower/upper bounds. See list_bundles for related 'math' calculators.

Parameters

Name | Required | Description | Default
std_dev | Yes | Standard deviation |
confidence | No | Confidence level | 95
sample_mean | Yes | Sample mean |
sample_size | Yes | Sample size |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
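
A normal-approximation (z) interval matches the description's fixed 90/95/99 levels. The sketch below assumes std_dev is the sample standard deviation and uses standard two-sided critical values; the server might instead use a t-interval for small samples:

```python
import math

# Two-sided critical z-values for the supported confidence levels.
Z = {90: 1.6449, 95: 1.9600, 99: 2.5758}

def confidence_interval(sample_mean, std_dev, sample_size, confidence=95):
    """Normal-approximation CI: mean +/- z * s / sqrt(n)."""
    margin = Z[confidence] * std_dev / math.sqrt(sample_size)
    return sample_mean - margin, sample_mean + margin

confidence_interval(100, 15, 100)  # margin 1.96 * 15/10 = 2.94, so about (97.06, 102.94)
```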
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states the purpose but fails to mention key behaviors such as: the required distributional assumptions (e.g., normal distribution), whether the interval is two-sided, how missing parameters affect computation, or the output format (e.g., lower and upper bounds). This omission is significant for a statistical tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short (4 words), which is concise but not sufficiently informative. It lacks structure and fails to front-load critical details that would help an agent quickly understand the tool's function and limitations. The brevity here sacrifices clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of calculating confidence intervals and the absence of an output schema and annotations, the description is severely incomplete. It does not explain the statistical assumptions, the formula used, or the returned values (e.g., confidence bounds). An agent cannot reliably use this tool without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
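To make the review's point concrete, the interval such a tool presumably computes can be sketched as below. The choice of a z-interval over a t-interval and the critical values are assumptions; the server does not document its method.

```python
import math

# Two-sided z critical values for the confidence levels the tool accepts.
# Assumed: the server's actual method (z vs. t interval) is undocumented.
Z_CRITICAL = {90: 1.645, 95: 1.960, 99: 2.576}

def confidence_interval(sample_mean, std_dev, sample_size, confidence=95):
    """Normal-approximation CI: mean ± z * sd / sqrt(n)."""
    margin = Z_CRITICAL[confidence] * std_dev / math.sqrt(sample_size)
    return (sample_mean - margin, sample_mean + margin)

lo, hi = confidence_interval(100.0, 15.0, 36, 95)  # margin = 1.96 * 2.5 = 4.9
```

For small samples a t-interval would be more appropriate, which is exactly the ambiguity the review flags.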

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with basic descriptions (e.g., 'Sample mean', 'Standard deviation', 'Confidence level' with an enum). The description adds no further meaning beyond the schema, so it meets the baseline expectation but does not enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Confidence interval for a mean' clearly states the tool's purpose: calculating a confidence interval for a sample mean. It distinguishes itself from the many sibling tools (e.g., calculate_z_score, calculate_statistics) by specifying the exact operation. However, it could be more precise about the type of interval (e.g., z-interval or t-interval) and the underlying assumptions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other statistical tools (e.g., hypothesis testing, z-score calculation). The description lacks any context about prerequisites, typical use cases, or alternatives, leaving the agent to infer usage without support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cooking_conversionBInspect

Convert recipe quantities between cups, ml, grams, oz, tbsp, tsp. Use for international recipe translation. Inputs: value, from, to, ingredient (for density). Returns: {original}. See list_bundles for related 'conversions' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
amountYesAmount to convert
to_unitYesTarget unit
from_unitYesSource unit

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits beyond the basic purpose. It does not mention output format, rounding, precision, or handling of invalid inputs. Since annotations are absent, the description carries the full burden but offers minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the purpose without wasted words. However, it is borderline under-specified, missing structured elements like supported unit lists or result format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool is simple (3 parameters), the description covers the basic purpose but lacks details on output format and behavioral edge cases. The absence of an output schema and annotations means the agent may be uncertain about the return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
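The density dependence the tool's description hints at ("ingredient (for density)") can be sketched as follows; all unit factors and densities here are illustrative assumptions, not the server's actual tables.

```python
# Illustrative unit factors and ingredient densities (assumed values).
ML_PER_UNIT = {"cup": 240.0, "tbsp": 15.0, "tsp": 5.0, "ml": 1.0}
G_PER_ML = {"water": 1.0, "flour": 0.53, "sugar": 0.85}

def cooking_convert(amount, from_unit, to_unit, ingredient="water"):
    """Convert via millilitres; grams require an ingredient density."""
    ml = amount * ML_PER_UNIT[from_unit]
    if to_unit == "g":
        return ml * G_PER_ML[ingredient]
    return ml / ML_PER_UNIT[to_unit]

cooking_convert(1, "cup", "tbsp")  # 240 / 15 = 16.0
```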

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add extra meaning beyond the schema; it merely restates the purpose. The schema already defines the three parameters (amount, from_unit, to_unit) with types and enums, so the description is not required to compensate for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Convert cooking measurements between common units' clearly states the verb (Convert) and the specific resource (cooking measurements), effectively distinguishing it from sibling tools like calculate_baking_conversion (which likely handles ingredient substitutions or oven temperatures) and convert_volume/convert_weight (general conversions not limited to cooking).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for cooking unit conversions but provides no explicit guidance on when to use this tool versus alternatives (e.g., convert_volume for non-cooking contexts). It does not mention exclusions or prerequisites, leaving the agent to infer from the enum parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cooking_timeCInspect

Estimate cooking time for meat, fish, or vegetables based on weight, method, and doneness. Use for kitchen planning. Inputs: food type, weight g, cooking method (oven/grill/sous-vide), doneness. Returns time min and target internal temp. See list_bundles for related 'cuisine' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
foodYes
methodYes
weight_kgYes

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It does not disclose behavioral traits such as assumptions about doneness, output format, or limitations. The brief description is insufficient for understanding what the tool does beyond the obvious.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded. However, it is too short to be fully effective; adding a bit more detail would not harm conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's three parameters and lack of output schema or annotations, the description is incomplete. It omits what the output is, potential edge cases (e.g., missing temperature), and how the result should be interpreted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description should compensate, but it only lists the parameter names without adding meaning. It does not explain the enum values (e.g., what 'oven' implies) or the weight unit, leaving an AI agent to infer.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (estimate), the resource (cooking time), and the inputs (food type, weight, method). It distinguishes itself from sibling tools like calculate_meat_cooking_time by focusing on general cooking time estimation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as calculate_meat_cooking_time or calculate_cooking_conversion. There are no exclusions or context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cost_per_useCInspect

Compute the cost-per-use of a purchase to evaluate value. Use for buying decisions on durable goods or subscriptions. Inputs: purchase price, expected uses or years. Returns cost per use and break-even use count. See list_bundles for related 'finance-universal' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
item_priceYesItem purchase price
expected_usesYesExpected number of uses

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It fails to disclose behavioral traits such as the exact calculation formula (division), handling of edge cases (e.g., zero expected_uses), return type, or any limitations. Only the basic operation is implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise with a single sentence that captures the core function. No superfluous information. However, it could be slightly more informative without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool, the description explains the purpose but omits the output (a cost per use value). Given no output schema, the description should at least mention the result. Overall, adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
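The arithmetic itself is a single division; a minimal sketch of the presumed behavior, where the guard against zero uses is exactly the kind of edge case the review says the description should state:

```python
def cost_per_use(item_price, expected_uses):
    """Presumed formula: purchase price divided by expected number of uses."""
    if expected_uses <= 0:
        raise ValueError("expected_uses must be positive")
    return item_price / expected_uses

cost_per_use(120.0, 48)  # 2.5 per use
```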

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters have descriptions). The description adds no additional meaning beyond the schema; it does not clarify units, format, or usage tips. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates cost per use for purchase evaluation. It is specific enough to distinguish from many generic calculate_* siblings, though 'calculate_unit_price' or 'calculate_cost_price' might overlap in use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like calculate_unit_price or calculate_cost_per_serving. The description does not mention appropriate contexts, prerequisites, or when to avoid this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cost_priceAInspect

Calculate unit cost price from raw materials, labor, and overhead. Returns: {total_cost}. See list_bundles for related 'finance-universal' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
laborYesLabor cost
overheadYesOverhead/indirect costs
quantityYesNumber of units produced
raw_materialsYesRaw material cost

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description does not disclose any behavioral traits beyond the calculation itself. Since it is a pure computation, the lack of side effects is expected, but the description could mention that it returns a numeric value or formula details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of eight words, with no filler. Every word is necessary and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (4 parameters, no output schema), the description is sufficient to understand its purpose and operation. It could also explain the formula (sum of costs divided by quantity), though that is not strictly required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
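The formula the review alludes to (sum of costs divided by quantity) can be sketched as:

```python
def unit_cost_price(raw_materials, labor, overhead, quantity):
    """Presumed formula: (raw materials + labor + overhead) / units produced."""
    return (raw_materials + labor + overhead) / quantity

unit_cost_price(500.0, 300.0, 200.0, 100)  # 10.0 per unit
```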

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with descriptions, so the description adds no additional meaning beyond what is already in the schema. The baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates unit cost price from raw materials, labor, and overhead. The verb 'calculate' and resource 'unit cost price' are specific and distinguishable among many sibling calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternative cost-related calculators (e.g., calculate_markup_margin, calculate_profit_margin). The description implies its use for cost price calculation but lacks explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_crop_factorBInspect

Calculate camera crop factor and equivalent focal length based on sensor width. See list_bundles for related 'photographie' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
sensor_width_mmYesCamera sensor width in millimeters (full frame = 36mm)

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It states what the tool calculates but does not disclose other details like return values or any assumptions. Given the simplicity, this is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that conveys the core functionality without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool claims to compute both crop factor and equivalent focal length, but the input schema only includes sensor width. To compute equivalent focal length, a lens focal length would be needed, which is missing. This makes the description misleading about the tool's capabilities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
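The missing-parameter problem is easy to see from the underlying formulas; assuming the usual 36 mm full-frame reference width (as the schema itself notes), a sketch:

```python
FULL_FRAME_WIDTH_MM = 36.0

def crop_factor(sensor_width_mm):
    """Crop factor relative to a 36 mm full-frame sensor."""
    return FULL_FRAME_WIDTH_MM / sensor_width_mm

def equivalent_focal_length(focal_length_mm, sensor_width_mm):
    # Requires the lens focal length, which the tool's schema does not accept.
    return focal_length_mm * crop_factor(sensor_width_mm)

equivalent_focal_length(50.0, 18.0)  # 50 mm * crop 2.0 = 100.0 mm
```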

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a clear description of sensor_width_mm including an example. The description does not add significant new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates camera crop factor and equivalent focal length based on sensor width. However, it does not explicitly differentiate from sibling tools like calculate_hyperfocal_distance or calculate_depth_of_field, which are also camera-related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as calculate_hyperfocal_distance or calculate_depth_of_field. There is no mention of prerequisites or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_crypto_profit_lossAInspect

Compute crypto trading profit/loss including fees. Use for crypto investors tracking realized P&L. Inputs: buy price, sell price, quantity, buy fee, sell fee. Returns net P&L, ROI %, break-even price. See list_bundles for related 'crypto' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
quantityYesQuantity of cryptocurrency traded
buy_priceYesPurchase price per unit in fiat currency
sell_priceYesSale price per unit in fiat currency
buy_fee_pctNoBuy transaction fee percentage (default 0.1%)
sell_fee_pctNoSell transaction fee percentage (default 0.1%)

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description is the sole source of behavioral info. It correctly indicates the tool performs a calculation, but it does not disclose the output format (e.g., returns a number representing profit/loss) or any limitations. The description broadly captures the operation but lacks granular behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence of 14 words, front-loading the core purpose. Every word contributes meaning without redundancy or excess.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description is minimal. It explains the purpose and parameters adequately but does not specify the return value type or provide examples. For a financial calculation tool, additional context (e.g., formula, return format) would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
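A plausible fee-inclusive P&L computation matching the documented inputs and outputs can be sketched as below; the exact formulas (in particular the break-even derivation) are assumptions.

```python
def crypto_pnl(buy_price, sell_price, quantity, buy_fee_pct=0.1, sell_fee_pct=0.1):
    """Net P&L, ROI %, and break-even sell price with percentage fees."""
    cost = buy_price * quantity * (1 + buy_fee_pct / 100)
    proceeds = sell_price * quantity * (1 - sell_fee_pct / 100)
    net = proceeds - cost
    return {
        "net_pnl": net,
        "roi_pct": 100 * net / cost,
        # Sell price at which proceeds exactly cover fee-inclusive cost.
        "break_even_price": buy_price * (1 + buy_fee_pct / 100)
                                      / (1 - sell_fee_pct / 100),
    }
```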

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already explains each parameter. The description adds 'including trading fees', which is reflected in the fee parameters, but it does not provide additional semantic meaning beyond what the schema offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates profit or loss on a cryptocurrency trade including trading fees. It uses a specific verb-resource pair and distinguishes itself from sibling tools like 'calculate_crypto_tax_fr', which focuses on tax calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests using this tool for single trade profit/loss calculations including fees, but it does not explicitly state when to use it versus alternatives (e.g., calculate_crypto_tax_fr) or provide any exclusions or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_crypto_tax_frBInspect

Calculate French flat tax (30% PFU) on cryptocurrency capital gains at withdrawal. Returns: {gain_ratio}. See list_bundles for related 'crypto' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
total_gains_eurYesTotal unrealized gains in the portfolio in EUR
withdrawal_amount_eurYesAmount being withdrawn/sold in EUR
total_portfolio_value_eurYesTotal current portfolio value in EUR

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only states what the tool calculates without mentioning prerequisites (e.g., realized gains), edge cases, or limitations. Beyond the core calculation, there is no insight into assumptions or rate applicability.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, well-structured sentence that concisely conveys the tool's purpose without superfluous text. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should clarify what it returns (e.g., tax amount or net value). It fails to describe the result format or any assumptions (tax year, rate exceptions). Critical information is missing for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
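One plausible reading of the three inputs, consistent with the French rule that a partial withdrawal is taxed on its pro-rata share of the portfolio's gain, is sketched below; the exact method and the returned fields are assumptions.

```python
def crypto_tax_fr(withdrawal_amount_eur, total_gains_eur, total_portfolio_value_eur):
    """Assumed method: tax 30% (PFU) of the withdrawal's pro-rata gain."""
    gain_ratio = total_gains_eur / total_portfolio_value_eur
    taxable_gain = withdrawal_amount_eur * gain_ratio
    return {
        "gain_ratio": gain_ratio,
        "taxable_gain": taxable_gain,
        "tax_eur": taxable_gain * 0.30,  # 30% flat: 12.8% IR + 17.2% social
    }
```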

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for all 3 required parameters, each already described in the schema. The description adds no extra meaning beyond the schema, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb 'Calculate', the resource 'French flat tax (30% PFU) on cryptocurrency capital gains', and the condition 'at withdrawal'. It distinguishes itself from sibling tools like calculate_crypto_profit_loss by specifying tax calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for French crypto tax calculation but provides no explicit guidance on when to use this tool versus alternatives, nor any when-not-to-use conditions. Sibling tools include many tax calculators, yet no differentiation is mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_currency_cross_rateAInspect

Calculate cross exchange rate between two currencies via USD. Returns: {cross_rate_a_to_b, cross_rate_b_to_a}. See list_bundles for related 'conversions' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
rate_a_usdYesUnits of currency A per 1 USD
rate_b_usdYesUnits of currency B per 1 USD

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but only mentions 'via USD' without explaining the calculation method (e.g., cross_rate_a_to_b = rate_b_usd / rate_a_usd) or edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
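Given the parameter definitions (units of each currency per 1 USD), the presumed arithmetic is the following sketch; the field names mirror the documented outputs.

```python
def cross_rates(rate_a_usd, rate_b_usd):
    """rate_a_usd / rate_b_usd: units of currency A / B per 1 USD.

    1 unit of A = (1 / rate_a_usd) USD = rate_b_usd / rate_a_usd units of B.
    """
    return {
        "cross_rate_a_to_b": rate_b_usd / rate_a_usd,
        "cross_rate_b_to_a": rate_a_usd / rate_b_usd,
    }

cross_rates(2.0, 150.0)  # 1 unit of A buys 75.0 units of B
```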

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is one sentence, front-loaded with the core action, and contains no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool with no output schema, the description sufficiently conveys the purpose and method. No additional context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds 'via USD' context but does not significantly enhance parameter understanding. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates cross exchange rates between two currencies via USD, using a specific verb and resource. It is distinct from sibling tools like calculate_currency_exchange.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not provide context for usage or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_currency_exchangeBInspect

Calculate currency exchange with bank margin and show fees lost. Returns: {amount_source, from_rate_vs_usd, to_rate_vs_usd, mid_market_amount, amount_after_margin, fees_lost, ...}. See list_bundles for related 'voyage' calculators.
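The margin arithmetic implied by the description and its return fields can be sketched as follows. The rate convention (both rates as units of currency per 1 USD) and the exact formula are assumptions; the server does not document them.

```python
def currency_exchange(amount: float, from_rate: float, to_rate: float,
                      bank_margin_pct: float = 2.5) -> dict:
    """Illustrative margin-aware conversion, not the server's exact method.

    Assumes from_rate and to_rate are units of each currency per 1 USD.
    """
    mid_market = amount * to_rate / from_rate          # fee-free cross conversion
    after_margin = mid_market * (1 - bank_margin_pct / 100)
    return {
        "amount_source": amount,
        "mid_market_amount": mid_market,
        "amount_after_margin": after_margin,
        "fees_lost": mid_market - after_margin,
    }
```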

ParametersJSON Schema
NameRequiredDescriptionDefault
amountYesAmount to exchange in source currency
to_rateYesTarget currency rate vs USD (e.g. JPY=150)
from_rateYesSource currency rate vs USD (e.g. EUR=1.08)
bank_margin_pctNoBank/exchange margin percentage (default 2.5%)

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, but it only mentions 'show fees lost', implying an output without detailing mutability, side effects, or return behavior. It does not disclose whether the tool is read-only or has other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is free of redundancy and efficiently conveys the tool's action. However, it is brief and could benefit from additional structuring for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description partially covers the tool's purpose but lacks details on return values, edge cases, or how to interpret results. Complete enough for a simple calculation but could be more helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds mention of 'bank margin' and 'fees lost', which hints at the result but does not enrich parameter meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate' and the resource 'currency exchange', including specifics like 'bank margin' and 'fees lost'. It distinguishes itself from sibling tools by focusing on margin and fee calculation, making its purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like calculate_currency_cross_rate or calculate_exchange_margin. The description does not specify prerequisites, contexts, or exclusion criteria, leaving the agent to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_curtain_fabricCInspect

Compute fabric meters needed for curtains, including hems and pattern repeat. Use for sewing. Inputs: window dimensions, fullness, pattern repeat. Returns fabric length to buy. See list_bundles for related 'textile-mode' calculators.
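One plausible reading of the method the description names (cut length plus hems, rounded up to the pattern repeat, multiplied by the number of fabric widths) is sketched below. The hem allowance, bolt width, and pattern-repeat parameter are illustrative assumptions that do not appear in the schema.

```python
import math

def curtain_fabric_m(window_width_cm: float, window_height_cm: float,
                     num_panels: int = 2, fullness_ratio: float = 2.0,
                     hem_allowance_cm: float = 30.0,
                     pattern_repeat_cm: float = 0.0,
                     bolt_width_cm: float = 140.0) -> float:
    """Fabric length to buy, in metres (hypothetical formula)."""
    cut_length = window_height_cm + hem_allowance_cm
    if pattern_repeat_cm > 0:
        # each cut must start at the same point in the pattern
        cut_length = math.ceil(cut_length / pattern_repeat_cm) * pattern_repeat_cm
    total_width = window_width_cm * fullness_ratio
    widths = max(num_panels, math.ceil(total_width / bolt_width_cm))
    return widths * cut_length / 100
```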

ParametersJSON Schema
NameRequiredDescriptionDefault
num_panelsNoNumber of curtain panels
fullness_ratioNoFullness ratio (2 = double fullness)
window_width_cmYesWindow width cm
window_height_cmYesWindow height cm

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior. It names hems and pattern repeat but does not explain how they enter the calculation, state the units of the returned length, or confirm that this is a pure, side-effect-free computation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and free of fluff, but its terse fragments sacrifice some informativeness. It is appropriately concise yet adds little beyond the essentials.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the generic output schema and lack of annotations, the description is incomplete. It names the returned quantity but not its units or the calculation method, and it does not differentiate this tool from similar ones.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. The tool description adds no extra meaning beyond what the schema already provides, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Compute fabric meters needed for curtains' clearly states the tool's purpose with a specific verb and resource. However, it does not differentiate from sibling tools like 'calculate_curtain_width' or 'calculate_fabric_needed', which have overlapping scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as when to calculate fabric vs curtain width. The description lacks any context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_curtain_widthCInspect

Compute total curtain width using a fullness factor (1.5-3.0×). Use for window dressing. Inputs: window width m, fullness ratio. Returns total fabric width. See list_bundles for related 'vie-quotidienne' calculators.
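The width calculation is a single multiplication. The sketch below uses numeric fullness factors, whereas the schema's `fullness` parameter appears to take a named preset (default 'standard'), so the preset names and their numeric values are assumptions.

```python
# Hypothetical mapping from the schema's named presets to numeric factors
# in the 1.5-3.0x range the description cites.
FULLNESS_FACTORS = {"flat": 1.5, "standard": 2.0, "luxurious": 3.0}

def curtain_total_width_cm(window_cm: float, fullness: str = "standard") -> float:
    """Total fabric width = window width x fullness factor."""
    return window_cm * FULLNESS_FACTORS[fullness]
```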

ParametersJSON Schema
NameRequiredDescriptionDefault
fullnessNoFullnessstandard
window_cmYesWindow width cm

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must fully disclose behavioral information. It fails to mention that this is a pure calculation with no side effects, or any constraints such as minimum window width.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and fragmentary. While concise, it omits necessary detail, reading more spartan than efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the generic output schema and no additional context, the description is incomplete. It names the returned width but gives neither its units nor the explicit formula, leaving the agent without enough information to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds minimal extra meaning beyond the schema, only hinting at the role of fullness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates curtain width based on fullness, providing a specific verb and resource. However, it does not distinguish this tool from sibling 'calculate_curtain_fabric', which may cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives or any prerequisites. The description lacks context for appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cycling_powerBInspect

Estimate cycling power output considering gradient, speed and total mass. Returns: {power_watts, watts_per_kg}. See list_bundles for related 'sport' calculators.
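A gravity-only estimate is the simplest model consistent with the listed inputs; it is sketched below as an illustration, not as the server's actual method, and it ignores air and rolling resistance, so it understates power on flat or gently sloped roads.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def climbing_power(speed_kmh: float, weight_kg: float, gradient_pct: float,
                   bike_weight_kg: float = 8.0) -> dict:
    """Gravity-only power estimate (hypothetical, not the server's model)."""
    v = speed_kmh / 3.6                          # convert km/h to m/s
    total_mass = weight_kg + bike_weight_kg
    slope = math.atan(gradient_pct / 100.0)      # grade percent -> angle
    power = total_mass * G * math.sin(slope) * v
    return {"power_watts": power, "watts_per_kg": power / weight_kg}
```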

ParametersJSON Schema
NameRequiredDescriptionDefault
speed_kmhYesSpeed in km/h
weight_kgYesRider weight in kilograms
gradient_pctYesRoad gradient in percent (positive = uphill)
bike_weight_kgNoBike weight in kilograms

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states the function but not the modeling assumptions (e.g., whether air and rolling resistance are included), the formula, side effects, or limitations. This is insufficient for a tool with no structured behavioral metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. However, it lacks structured formatting (e.g., bullet points) that could improve clarity. Appropriate length for a simple tool, but could be enhanced.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters and only a generic output schema, the description names the returned fields but not the calculation assumptions (drag, rolling resistance, drivetrain losses) or the usage context, leaving the agent uncertain about how to interpret the estimate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so the baseline is 3. The description adds minimal value by naming 'total mass' (implying rider + bike weight) but doesn't provide essential context beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Estimate') and clearly identifies the resource ('cycling power output'). It lists key input factors (gradient, speed, total mass), making the tool's purpose unambiguous and distinct from the many sibling calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. No mention of constraints, prerequisites, or scenarios where this tool is appropriate or not.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_cylinderCInspect

Compute cylinder volume V=πr²h and surface area A=2πr(r+h). Use for tanks, pipes, or containers. Inputs: radius, height. Returns volume and areas. See list_bundles for related 'math' calculators.
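The formulas in the description translate directly into code; a minimal sketch, with the output keys chosen here for illustration since the description says only "Returns volume and areas":

```python
import math

def cylinder(radius: float, height: float) -> dict:
    """Right circular cylinder: V = pi*r^2*h, total A = 2*pi*r*(r + h).

    Results are in the cube/square of whatever length unit the inputs use.
    """
    return {
        "volume": math.pi * radius ** 2 * height,
        "lateral_area": 2 * math.pi * radius * height,
        "total_area": 2 * math.pi * radius * (radius + height),
    }
```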

ParametersJSON Schema
NameRequiredDescriptionDefault
heightYesHeight
radiusYesRadius

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states the formulas and outputs, but not the assumptions (e.g., a right circular cylinder), precision, or units. This is insufficient for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short and front-loaded with the formulas. While concise, its fragmentary style ('Inputs: radius, height') reads less smoothly than full sentences would.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is a simple calculator with only a generic output schema, the description lacks details on the return format and units. This is incomplete for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes both parameters (radius and height) with 100% coverage. The description adds no extra meaning beyond that, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the verb 'Compute' and clearly identifies the resource (cylinder) and the outputs (volume and surface area). It distinguishes this tool from siblings like calculate_cone and calculate_sphere, though it does not say when to prefer it over the generic calculate_volume.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Without any context about prerequisites or exclusions, the agent has no usage direction beyond the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_daily_proteinCInspect

Calculate recommended daily protein intake based on weight and fitness goal. Returns: {rate_g_per_kg, daily_protein_g, calories_from_protein}. See list_bundles for related 'sante' calculators.
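The calculation behind the listed return fields is likely a goal-to-rate lookup followed by two multiplications. In the sketch below, the goal names and g/kg rates are illustrative midpoints of common sport-nutrition guidance, not the server's documented values.

```python
# Hypothetical goal names and illustrative g/kg rates.
RATES_G_PER_KG = {"maintenance": 0.8, "endurance": 1.4, "muscle_gain": 1.8}

def daily_protein(weight_kg: float, goal: str) -> dict:
    rate = RATES_G_PER_KG[goal]
    grams = rate * weight_kg
    return {
        "rate_g_per_kg": rate,
        "daily_protein_g": grams,
        "calories_from_protein": grams * 4,  # protein ~4 kcal per gram
    }
```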

ParametersJSON Schema
NameRequiredDescriptionDefault
goalYesFitness goal
weight_kgYesBody weight in kilograms

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description does not disclose the formula, assumptions (e.g., standard protein recommendations), or limitations. The agent cannot infer the underlying model or whether it accounts for activity level beyond the enum values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description immediately conveys the action and required inputs, with no superfluous words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Only a generic output schema is provided, and while the description lists the returned field names, it does not explain how each goal maps to a g/kg rate or cite the recommendation source. For a health tool, that missing context matters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so both parameters are already explained. The description adds no new meaning beyond summarizing 'weight and fitness goal.' Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Calculate recommended daily protein intake' with a specific verb and resource. It distinguishes from sibling tools like calculate_bmr or calculate_daily_vitamins, but does not explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as calculate_bmr or calculate_tdee. The description lacks context like 'for dietary planning' or 'instead of general calorie calculations'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_daily_vitaminsBInspect

Check daily vitamin and mineral intake against RDA recommendations. Use for nutrition tracking. Inputs: list of foods with quantities. Returns % RDA per nutrient and deficiencies. See list_bundles for related 'cuisine' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
ageYes
sexYes

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description only states that intake is checked against RDA recommendations, without detailing behavior such as input validation, output format, or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four short sentences efficiently convey the purpose, inputs, and return value, with no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No usable output schema is provided, and the description fails to explain the return structure, which the agent needs in order to use the results. The stated inputs (a list of foods with quantities) also do not match the schema's required age and sex parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%: neither age nor sex carries a description. The tool description does not fill the gap, lacking details such as the expected age format or valid sex values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool checks vitamin and mineral intake against RDA recommendations, using a specific verb and resource that distinguish it from its many calculation siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives; no conditions or exclusions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_data_transfer_timeCInspect

Calculate file transfer time at a given connection speed. See list_bundles for related 'conversions' calculators.
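The underlying arithmetic is a unit conversion plus a division. The decimal-unit assumption (1 GB = 8000 megabits) and the zero-overhead idealization in the sketch below are choices the tool's description does not confirm.

```python
def transfer_time_seconds(file_size_gb: float, speed_mbps: float) -> float:
    """Idealized transfer time: assumes decimal units (1 GB = 8000 Mb)
    and no protocol overhead, so real transfers take somewhat longer."""
    if speed_mbps <= 0:
        raise ValueError("speed_mbps must be positive")
    return file_size_gb * 8000 / speed_mbps
```

A 1 GB file at 100 Mbps comes out to 80 seconds under these assumptions.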

ParametersJSON Schema
NameRequiredDescriptionDefault
speed_mbpsYesConnection speed in Mbps
file_size_gbYesFile size in GB

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose all behavioral traits. It does not mention that the calculation is a simple division (file size / speed), any assumptions about overhead, or the resulting time unit. This leaves agents without crucial context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that concisely conveys the tool's purpose without unnecessary words. It is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the simplicity of the tool, no output format or return value is mentioned. Without annotations or an output schema, the description should clarify what the result represents (e.g., time in seconds) for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides clear descriptions for both parameters (file_size_gb and speed_mbps) with 100% coverage. The description adds no additional meaning beyond what the schema already conveys, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates file transfer time based on connection speed. It is specific and distinct from sibling calculators like calculate_speed_distance_time, which deals with general speed/distance/time.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other similar calculators (e.g., calculate_data_storage for storage units, or convert_speed for speed conversion). The description lacks any context for usage selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_day_of_weekBInspect

Find the day of the week for any date (Zeller's congruence). Use for historical dates or birthday checks. Inputs: day, month, year. Returns weekday name. See list_bundles for related 'fun' calculators.
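Zeller's congruence, which the description names, can be implemented as below. The single `date` string input follows the schema; the prose mentions separate day/month/year inputs, so the string form is an assumption about the actual call shape.

```python
def day_of_week(date_str: str) -> str:
    """Zeller's congruence for the Gregorian calendar ('YYYY-MM-DD' input)."""
    year, month, day = (int(p) for p in date_str.split("-"))
    if month < 3:                  # January and February count as
        month += 12                # months 13 and 14 of the previous year
        year -= 1
    k, j = year % 100, year // 100
    h = (day + 13 * (month + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]
```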

ParametersJSON Schema
NameRequiredDescriptionDefault
dateYesDate in YYYY-MM-DD format

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. 'Find' implies a read operation, but the description does not state date range restrictions or calendar assumptions (e.g., Gregorian only), nor confirm the absence of side effects. This is insufficient for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with no wasted words. It is concise and immediately conveys the tool's function, though it could slightly expand on edge cases without losing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, generic output schema), the description is minimally viable but incomplete. It lacks edge-case details (e.g., invalid or pre-Gregorian dates), which would make it more useful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the one parameter 'date' is described as 'Date in YYYY-MM-DD format'). The tool description adds no further semantic meaning, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Find the day of the week for any date' clearly states the tool's purpose with a specific verb and resource. It effectively distinguishes from sibling tools like calculate_age or calculate_days_between, which handle other date calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternative date-related tools. No explicit context, exclusions, or comparisons to siblings are given, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_days_between (C)

Calculate days, weeks, approximate months and working days between two dates. Returns: {weeks, months_approx, working_days}. See list_bundles for related 'temps-rh' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
end_date | Yes | YYYY-MM-DD — End date
start_date | Yes | YYYY-MM-DD — Start date

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
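A minimal sketch of the date-span arithmetic above, assuming working days are simple weekday counts with public holidays ignored (the server's exact rules are not documented here):

```python
from datetime import date, timedelta

def days_between(start_date: str, end_date: str) -> dict:
    """Spans between two ISO dates; working days ignore public holidays."""
    start = date.fromisoformat(start_date)
    end = date.fromisoformat(end_date)
    total = (end - start).days
    # Count Monday-Friday dates in [start, end)
    working = sum(1 for n in range(total)
                  if (start + timedelta(days=n)).weekday() < 5)
    return {"days": total,
            "weeks": round(total / 7, 1),
            "months_approx": round(total / 30.44, 1),  # mean Gregorian month
            "working_days": working}
```

For a Monday-to-Monday week, `days_between("2024-01-01", "2024-01-08")` yields 7 days and 5 working days.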
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behaviors. It only lists output types without explaining key details such as how 'approximate months' are computed, how working days are defined (e.g., holidays ignored), or what happens if end_date precedes start_date. This lack of transparency could lead to misuse.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is efficiently front-loaded with the verb 'Calculate'. However, it could be slightly more structured to list constraints or return format without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (multiple output types) and lack of output schema, the description should ideally hint at the return structure or unit handling. It is adequate for a simple tool but lacks completeness in describing the output format and edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides 100% coverage with clear parameter descriptions including format YYYY-MM-DD. The description adds no new semantics beyond the schema's own descriptions, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool calculates days, weeks, months, and working days between two dates, using a specific verb and resource combination. A sibling tool named 'calculate_working_days' exists and the description does not differentiate their scope, but it is still clear enough.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'calculate_working_days' or other date-related tools. There is no mention of prerequisites, exclusions, or context for best use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_debt_capacity (A)

Calculate maximum loan capacity using French HCSF 35% debt ratio rule. Returns: {max_monthly_payment, max_loan, note}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
rate | No | Annual interest rate in % (default 3.5)
duration_years | No | Loan duration in years (default 25)
existing_debts | No | Existing monthly debt payments in EUR (default 0)
monthly_income | Yes | Net monthly income in EUR

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
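The HCSF 35% rule combined with a standard annuity present-value formula can be sketched as follows, with defaults taken from the schema (an illustration, not the server's actual code):

```python
def debt_capacity(monthly_income, existing_debts=0.0,
                  rate=3.5, duration_years=25):
    """Maximum loan under the French HCSF 35% debt-service cap (sketch)."""
    max_payment = 0.35 * monthly_income - existing_debts
    r = rate / 100 / 12       # monthly interest rate
    n = duration_years * 12   # number of monthly payments
    # Present value of an annuity: loan = payment * (1 - (1 + r)**-n) / r
    max_loan = max_payment * (1 - (1 + r) ** -n) / r if r > 0 else max_payment * n
    return {"max_monthly_payment": round(max_payment, 2),
            "max_loan": round(max_loan, 2)}
```

At EUR 3,000 net income with the defaults, the cap is EUR 1,050/month, giving a maximum loan of roughly EUR 210,000.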
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as side effects, return value structure, or data usage. It only says 'calculate'; the tool likely has no side effects, but the description should confirm this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the purpose. Every word is necessary, with no redundancy or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters, no output schema, and no annotations, the description is minimal. It covers the core purpose but lacks explanation of return values, assumptions, or edge cases. It is adequate but incomplete for an AI agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so baseline is 3. The description adds no extra semantic meaning beyond the schema; it mentions the rule but does not explain how each parameter is used in the calculation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates maximum loan capacity using the French HCSF 35% debt ratio rule, with a specific verb and resource. It distinguishes itself from numerous sibling financial tools by referencing a specific French regulatory rule.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for French debt ratio calculations but does not explicitly state when to use this tool versus alternatives like calculate_loan_payment or calculate_mortgage. No when-to-use or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_debt_service_ratio (B)

Calculate debt-to-income ratio and maximum additional loan capacity. Returns: {ratio_pct, max_monthly_debt_35pct, max_additional_loan_payment, status}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
monthly_debts | Yes | Existing monthly debt payments EUR
monthly_income | Yes | Net monthly income EUR

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
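A sketch of the ratio and 35%-cap arithmetic implied by the listed return fields (the 'status' labels here are assumptions, not necessarily the server's wording):

```python
def debt_service_ratio(monthly_income, monthly_debts):
    """Debt-to-income ratio plus headroom under a 35% cap (sketch)."""
    ratio_pct = monthly_debts / monthly_income * 100
    cap = 0.35 * monthly_income
    return {"ratio_pct": round(ratio_pct, 1),
            "max_monthly_debt_35pct": round(cap, 2),
            "max_additional_loan_payment": round(max(cap - monthly_debts, 0), 2),
            "status": "within cap" if monthly_debts <= cap else "over cap"}
```

At EUR 3,000 income and EUR 600 of debts, the ratio is 20% with EUR 450 of headroom before the 35% cap.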
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose behavioral details such as whether the ratio is expressed as a percentage or a fraction, what formula is used, or if any assumptions apply (e.g., housing debt included). The agent only learns the broad purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence, but it could benefit from a brief note about the output format or key assumptions. It's concise but not optimally structured for quick parsing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should clarify return values (e.g., does it return a single ratio or two separate numbers?) and any constraints (e.g., maximum ratio thresholds). It lacks this information, making it incomplete for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema's parameter descriptions; it merely restates the overall goal. No parameter-specific guidance is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Calculate debt-to-income ratio and maximum additional loan capacity,' specifying the exact financial metrics and distinguishing it from sibling tools like calculate_debt_capacity or calculate_debt_to_income.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for assessing loan capacity, but provides no explicit guidance on when to use this tool vs. alternatives like calculate_debt_capacity or calculate_loan_to_value. No exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_debt_to_income (C)

Compute debt-to-income (DTI) ratio. Use for mortgage qualification or financial health checks. Inputs: monthly debt, monthly gross income. Returns DTI % and risk category. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
monthly_debt | Yes | Total monthly debt payments
monthly_income | Yes | Gross monthly income

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
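A sketch of the DTI computation; the risk bands below follow commonly cited US mortgage guidance and are assumptions, since the server's own cutoffs are not documented:

```python
def debt_to_income(monthly_debt, monthly_income):
    """DTI percentage with illustrative risk bands (thresholds assumed)."""
    if monthly_income <= 0:
        raise ValueError("monthly_income must be positive")
    dti = round(monthly_debt / monthly_income * 100, 1)
    # Bands loosely follow commonly cited US mortgage guidance.
    if dti < 36:
        risk = "healthy"
    elif dti <= 43:
        risk = "manageable"
    else:
        risk = "high"
    return {"dti_pct": dti, "risk": risk}
```

Note the explicit guard for zero income, one of the edge cases the review flags as undocumented.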
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavioral traits. It only states the calculation without mentioning side effects, permissions, return format, or edge cases (e.g., zero income). The agent receives no insight into output structure or assumptions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise and directly states the purpose. However, it lacks any structured breakdown of parameters or output, and the brevity sacrifices completeness for simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and only two parameters, the description is incomplete. It fails to explain the calculation formula, the expected output (e.g., a ratio as a percentage or decimal), or any validation rules (e.g., handling of zero income). This leaves gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters (monthly_debt and monthly_income). The description adds no additional parameter meaning beyond what the schema already provides, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Calculate debt-to-income ratio' clarifies what the tool does, but it is essentially a restatement of the tool's name. It does not differentiate this tool from sibling financial calculators like 'calculate_debt_capacity' or 'calculate_debt_service_ratio', providing no additional context on its specific role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Given many sibling calculators, an agent would have no information about scenarios that favor this tool, such as evaluating mortgage eligibility or personal finance health.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_delivery_cost (B)

Estimate shipping cost from weight, distance and service (standard vs express). Returns: {cost_eur, formula, note}. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
type | No | Service level | standard
weight_kg | Yes | Package weight kg
distance_km | Yes | Delivery distance km

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
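Since the server's pricing formula is not documented here, the following is a purely hypothetical linear model; the base fee, per-kg and per-km rates, and the express surcharge are all invented for illustration:

```python
def delivery_cost(weight_kg, distance_km, type="standard"):
    """Toy linear shipping model; every rate here is invented for illustration."""
    base, per_kg, per_km = 3.0, 0.5, 0.02
    cost = base + per_kg * weight_kg + per_km * distance_km
    if type == "express":
        cost *= 1.5  # assumed express surcharge
    return round(cost, 2)
```

A 2 kg parcel over 100 km would cost 6.00 in this toy model, or 9.00 with the assumed express surcharge.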
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does not mention whether the cost is an approximation, what currency is used, any assumptions, or side effects. The agent is left uninformed about the output format or potential limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single clear sentence of 12 words, front-loaded with the action and resource. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema, the description should at least hint at the return value (e.g., estimated cost in a specific currency). It fails to provide this, leaving a gap in agent understanding of the tool's output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description merely restates the parameters (weight, distance, service) without adding deeper meaning, units, or validations beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Estimate', the resource 'shipping cost', and the inputs (weight, distance, service). It effectively distinguishes the tool from siblings like 'calculate_international_shipping' by specifying the service option 'standard vs express'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus alternatives like 'calculate_international_shipping'. The description does not mention any prerequisites, exclusions, or preferred contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_density (C)

Compute density, mass, or volume given the other two. ρ=m/V. Use for materials, chemistry, fluid dynamics. Inputs: any 2 of (mass, volume, density). Returns the third. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
density | No | kg/m³
mass_kg | No | Mass kg
volume_m3 | No | Volume m³

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
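The ρ = m/V relationship with the any-two-of-three input pattern described above can be sketched as:

```python
def density_solve(mass_kg=None, volume_m3=None, density=None):
    """Given exactly two of mass, volume, density (rho = m / V), return the third."""
    if sum(v is not None for v in (mass_kg, volume_m3, density)) != 2:
        raise ValueError("provide exactly two of mass_kg, volume_m3, density")
    if density is None:
        return {"density": mass_kg / volume_m3}
    if mass_kg is None:
        return {"mass_kg": density * volume_m3}
    return {"volume_m3": mass_kg / density}
```

The explicit "exactly two inputs" check is the behavior the review notes is missing from the description.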
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not disclose behavioral aspects such as the need for exactly two inputs, the formula used, or error handling. With no annotations, the description carries the full burden but only states the general purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short (4 words) and front-loaded, but lacks essential details. It is concise but at the expense of completeness, making it insufficient for an agent to use correctly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 optional parameters and no output schema, the description is inadequate. It does not explain that two inputs are required, what happens with all three or none, or the calculation outcome. The context is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter (density in kg/m³, mass in kg, volume in m³). The description adds no semantic value beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool calculates density, mass, or volume, but does not specify which is the output given the other two inputs. It is a clear verb+resource combination but lacks precision in explaining the relationship between parameters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool over its many siblings like 'calculate_density_convert' or other physics calculators. No when-to-use or when-not-to-use information is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_density_convert (B)

Convert density between kg/m³, g/cm³, lb/ft³, and lb/gal. Use for engineering, chemistry, fluid mechanics. Inputs: value, from-unit, to-unit. Returns converted density. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
value | Yes | Density value
to_unit | Yes | Target unit
from_unit | Yes | Source unit

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
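A sketch of the conversion via a common base unit. ASCII unit keys are used here for convenience (the server's accepted unit strings may differ); the factors are standard, with lb/gal taken as the US liquid gallon:

```python
# Conversion factors to kg/m3 (ASCII keys; lb/gal = US liquid gallon).
TO_KG_M3 = {
    "kg/m3": 1.0,
    "g/cm3": 1000.0,
    "lb/ft3": 16.018463,
    "lb/gal": 119.826427,
}

def density_convert(value, from_unit, to_unit):
    """Convert a density value by routing through kg/m3."""
    return value * TO_KG_M3[from_unit] / TO_KG_M3[to_unit]
```

As a sanity check, water at 62.428 lb/ft³ converts to about 1 g/cm³.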
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states the conversion action, failing to disclose behavioral details like whether the result is returned directly, handling of invalid inputs, or any constraints beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded with the core purpose and wastes no words. It is appropriately concise for the simple conversion task.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool with no output schema, the description is functional but does not mention the output format or return value. While the schema covers inputs, the lack of output description leaves some uncertainty, but given the simplicity, it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear parameter descriptions (value, from_unit, to_unit). The description adds the set of units, confirming but not enriching the semantics beyond the schema. Baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and the resource 'density', listing the specific units (kg/m³, g/cm³, lb/ft³, lb/gal). This distinguishes it from siblings, which cover other conversion types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives or any prerequisites. It simply states what it does, leaving usage context entirely implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_depth_of_field (A)

Calculate depth of field, near/far focus limits and hyperfocal distance for a camera lens. Returns: {near_limit_m, far_limit_m, coc_mm}. See list_bundles for related 'photographie' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
aperture | Yes | Lens aperture (f-number, e.g. 2.8)
distance_m | Yes | Subject distance in meters
focal_length_mm | Yes | Lens focal length in millimeters
sensor_width_mm | No | Camera sensor width in mm (default 36 for full frame)

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
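A sketch using the standard thin-lens depth-of-field formulas; the circle of confusion is assumed to be sensor_width / 1500, a common rule of thumb that may differ from the server's choice:

```python
def depth_of_field(focal_length_mm, aperture, distance_m, sensor_width_mm=36.0):
    """Near/far focus limits from the standard thin-lens DoF formulas."""
    coc_mm = sensor_width_mm / 1500  # assumed circle-of-confusion rule
    f = focal_length_mm
    s = distance_m * 1000  # work in millimetres throughout
    hyperfocal = f * f / (aperture * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    if s < hyperfocal:
        far = s * (hyperfocal - f) / (hyperfocal - s)
        far_m = round(far / 1000, 3)
    else:
        far_m = None  # beyond the hyperfocal distance, the far limit is infinity
    return {"near_limit_m": round(near / 1000, 3),
            "far_limit_m": far_m,
            "coc_mm": round(coc_mm, 4)}
```

A 50 mm lens at f/2.8 focused at 3 m on full frame gives roughly a 2.78-3.26 m zone of acceptable sharpness.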
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description only lists what is calculated without disclosing behavioral aspects such as side effects, authentication needs, or output format. For a calculation tool, this is a moderate gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that directly communicates the tool's purpose without any superfluous information. It is well-structured and efficiently conveys the key functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lists the computed outputs but does not detail the return format, units, or the role of the optional parameter. Given the absence of an output schema, additional context would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes all parameters with 100% coverage. The description adds no additional meaning beyond what the schema provides, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool calculates depth of field, near/far focus limits, and hyperfocal distance. This distinguishes it from sibling tools like 'calculate_hyperfocal_distance', which likely only computes hyperfocal distance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus alternatives. While the sibling 'calculate_hyperfocal_distance' exists, the description does not provide context or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_dew_point (C)

Compute dew point temperature using Magnus formula. Use for HVAC, weather, comfort analysis. Inputs: temperature °C, relative humidity %. Returns dew point °C and comfort class. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
temp_c | Yes | Temperature °C
humidity_pct | Yes | Relative humidity %

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
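The Magnus formula named in the description can be sketched as follows, using the common coefficients a = 17.62, b = 243.12 over water (the server's constants may differ):

```python
import math

def dew_point(temp_c, humidity_pct):
    """Dew point in degrees C via the Magnus formula."""
    a, b = 17.62, 243.12  # Magnus coefficients over water
    gamma = math.log(humidity_pct / 100) + a * temp_c / (b + temp_c)
    return round(b * gamma / (a - gamma), 1)
```

At 100% relative humidity the dew point equals the air temperature; at 25 °C and 50% RH it is about 13.9 °C.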
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully describe behavior. It only states the output concept without details on side effects, return format, or handling of invalid inputs. This is insufficient for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one phrase) and front-loaded. However, it provides minimal added value beyond the tool name, lacking informative content that would justify its inclusion.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 2 parameters, no output schema, and no annotations, the description fails to provide sufficient context. It omits return value type, units (assumed °C), and any behavioral constraints, leaving it short of even minimal usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no additional meaning beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Dew point temperature' clearly indicates the tool calculates dew point temperature from input parameters, which is evident from the name and schema. However, it does not differentiate itself from the many similar 'calculate_' sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With hundreds of sibling calculation tools, the lack of context for appropriate scenarios is a significant gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_dice_probabilityBInspect

Calculate dice roll probability for exact values, minimum or maximum targets. Returns: {probability}. See list_bundles for related 'jeux-probabilites' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
targetYesTarget value to calculate probability for
num_diceYesNumber of dice to roll
num_sidesNoNumber of sides on each die (default d6)
comparisonYesComparison type: exact match, at least target, or at most target

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
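The probability described above can be computed by brute-force enumeration. A minimal sketch, assuming fair, independent dice and that the tool targets the sum of all dice (the description does not say whether it evaluates the sum or individual dice); the comparison keywords here are hypothetical:

```python
from fractions import Fraction
from itertools import product

def dice_probability(target, num_dice, num_sides=6, comparison="exact"):
    """Probability that the sum of num_dice fair dice meets the target."""
    outcomes = list(product(range(1, num_sides + 1), repeat=num_dice))
    if comparison == "exact":
        hits = sum(1 for o in outcomes if sum(o) == target)
    elif comparison == "at_least":
        hits = sum(1 for o in outcomes if sum(o) >= target)
    else:  # "at_most"
        hits = sum(1 for o in outcomes if sum(o) <= target)
    return Fraction(hits, len(outcomes))
```

Enumeration is exponential in the number of dice; a real implementation would more likely convolve per-die distributions, but the result is the same for small inputs (e.g. rolling exactly 7 on 2d6 is 1/6).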
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral details. It does not mention assumptions (e.g., fair dice, independent rolls), limitations, or what happens with invalid inputs. The description is minimal and lacks transparency about the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. It is efficient but could benefit from slightly more detail without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool computes probabilities with 4 parameters and has no output schema or annotations, the description is incomplete. It does not explain the output format (e.g., decimal, percentage) or mention constraints like maximum dice/sides, leaving the agent without essential context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for all parameters. The description adds context by naming 'exact values, minimum or maximum targets,' which maps to the comparison enum. However, it does not explain parameter specifics beyond what the schema already provides, so the value added is marginal.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates dice roll probability and specifies the three target types: exact, minimum, and maximum. This immediately distinguishes it from other calculator tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus other probability tools (e.g., calculate_card_draw_probability, calculate_probability_binomial) or any alternatives. No exclusions or context are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_dilutionCInspect

Compute dilution C1·V1=C2·V2. Solve for any unknown. Use for chemistry, lab work, pharmacy. Inputs: any 3 of (C1, V1, C2, V2). Returns the fourth. See list_bundles for related 'science' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
c1NoInitial concentration
c2NoFinal concentration
v1NoInitial volume
v2NoFinal volume

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
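The solve-for-any-unknown behaviour the description promises (C1·V1 = C2·V2, pass any three values) can be sketched as follows. Parameter names mirror the schema; the error handling for over- or under-specified inputs is an assumption.

```python
def solve_dilution(c1=None, v1=None, c2=None, v2=None):
    """Return the single missing value in C1*V1 = C2*V2."""
    missing = [name for name, val in
               {"c1": c1, "v1": v1, "c2": c2, "v2": v2}.items() if val is None]
    if len(missing) != 1:
        raise ValueError("exactly three of c1, v1, c2, v2 must be provided")
    if c1 is None:
        return c2 * v2 / v1
    if v1 is None:
        return c2 * v2 / c1
    if c2 is None:
        return c1 * v1 / v2
    return c1 * v1 / c2  # v2 is the unknown
```

For example, diluting 5 units of a 10 mol/L stock down to 2 mol/L (`solve_dilution(c1=10, v1=5, c2=2)`) gives a final volume of 25 units.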
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the burden. It only states the formula, not how the tool behaves (e.g., does it solve for the missing variable? what if multiple parameters are missing?). There is no mention of error handling, default behavior, or constraints beyond the formula.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely terse (two words plus a formula). While short, it sacrifices essential information such as what the tool returns or how to use it. Conciseness here leads to under-specification rather than efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no required fields, and no output schema, the description should explain what the tool returns (probably the missing value) and how to indicate which variable to solve for. The current description lacks this, making it incomplete for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: each parameter has a clear description (e.g., 'Initial concentration'). The description adds no extra meaning beyond 'Dilution formula C1V1=C2V2'; it does not explain how the parameters map to the formula or clarify edge cases. The baseline is 3 due to high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Dilution formula C1V1=C2V2' clearly states the tool implements a specific formula (C1V1=C2V2). The verb 'calculate' is implied via the name, and the resource is the dilution formula. Among many sibling tools with 'calculate_' prefix, this description uniquely identifies its function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It does not mention that it can solve for any missing parameter given three known values, nor does it specify typical use cases or limitations. The agent receives no context for selecting this tool over similar calculation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_discountCInspect

Calculate discounted price with optional successive discounts. Returns: {original_price, price_after_first, effective_discount_pct}. See list_bundles for related 'finance-universal' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
discount_pctYesFirst discount percentage
discount2_pctNoOptional second successive discount
original_priceYesOriginal price

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
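The review below notes that the description does not spell out how successive discounts combine. The standard interpretation, sketched here as an assumption, is that the second percentage applies to the already-reduced price rather than the original:

```python
def discounted_price(original_price, discount_pct, discount2_pct=None):
    """Apply a discount, then an optional second discount to the reduced price."""
    price = original_price * (1 - discount_pct / 100)
    if discount2_pct is not None:
        price *= 1 - discount2_pct / 100
    return price
```

Under this rule, 20 % then 10 % off 100 gives 72, not 70; the two percentages do not simply add.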
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only states 'calculate discounted price', which implies a read-only calculation, but does not mention rounding, precision, order of successive discounts, or any constraints beyond schema limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 6 words, conveying the core purpose efficiently. It is front-loaded and contains no unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is too minimal. It does not explain the order of successive discounts (e.g., first discount_pct then discount2_pct applied sequentially on the reduced price), nor does it mention rounding or result format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds 'optional successive discount' which matches the schema's description for discount2_pct, but does not provide additional meaning beyond what the schema already conveys.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates discounted price with optional successive discounts. It distinguishes from many sibling tools by focusing on discounts, but does not explicitly differentiate from similar tools like calculate_discount_effective.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. There is a sibling tool 'calculate_discount_effective', but the description does not explain when to use one over the other.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_discount_effectiveCInspect

Compute the effective discount when stacking multiple promotions. Use for promo design or shopping comparisons. Inputs: original price, discount % list. Returns final price and effective single-discount equivalent. See list_bundles for related 'finance-universal' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
discount_1_pctYesFirst discount %
discount_2_pctNoSecond discount %
original_priceYesOriginal price

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
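The combination rule the review asks about (sequential, not additive) can be made explicit. A minimal sketch, assuming discounts stack multiplicatively as the description's "effective single-discount equivalent" implies:

```python
def effective_discount_pct(discounts_pct):
    """Single discount equivalent to applying each percentage in sequence."""
    factor = 1.0
    for d in discounts_pct:
        factor *= 1 - d / 100
    return (1 - factor) * 100
```

Stacking 20 % and 10 % is therefore an effective 28 % discount, not 30 %.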
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully explain behavior, but it only states the purpose. It does not reveal how discounts are combined (sequential, additive, etc.) or what the output represents (effective percentage, final price).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (4 words), which is efficient but at the cost of clarity. It is not front-loaded with critical information, and the brevity sacrifices completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the financial nature of the tool and lack of output schema, the description fails to explain return format or how discounts are computed. Users cannot anticipate the result without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage; parameters are labeled 'First discount %', 'Second discount %', 'Original price'. The description adds no additional meaning beyond these labels, so it meets the baseline but does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Effective discount with stacked discounts' clearly indicates the tool calculates the combined effect of multiple discounts. While it's not a full sentence with a verb, the tool name provides the action. It distinguishes from simpler discount tools by mentioning 'stacked'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives like the sibling 'calculate_discount'. The description does not mention prerequisites, conditions, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_distance_2dBInspect

Compute Euclidean distance between two 2D points. Use for geometry, mapping. Formula: √((x2−x1)²+(y2−y1)²). Inputs: x1,y1,x2,y2. Returns distance. See list_bundles for related 'math' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
x1YesX1
x2YesX2
y1YesY1
y2YesY2

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
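The formula √((x2−x1)²+(y2−y1)²) quoted in the description maps directly onto the Python standard library; this sketch is one obvious implementation, not necessarily the server's:

```python
import math

def distance_2d(x1, y1, x2, y2):
    """Euclidean distance between (x1, y1) and (x2, y2)."""
    return math.hypot(x2 - x1, y2 - y1)
```

`math.hypot` is preferable to a naive square-root of squares because it avoids intermediate overflow and underflow for extreme coordinates.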
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits beyond the basic operation. It does not specify the type of distance (e.g., Euclidean), units, or return value, leaving the agent unaware of important details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (five words), which is appropriate for a simple mathematical operation. However, it is a fragment and lacks any structural elements like units or output description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is insufficient. It does not explain the formula (Euclidean distance), return format, or edge cases like negative coordinates. For a simple tool, it might be minimally acceptable, but more context would improve agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema descriptions are minimal ('X1', 'Y2', etc.), and the tool description adds no additional meaning. While schema coverage is 100%, the descriptions are tautological. The description does not clarify the role of each parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates the distance between two 2D points. It names a specific operation ('Distance') and resource ('2D points'), and it distinguishes the tool from siblings like calculate_distance_3d.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, but the purpose is obvious. Usage is implied by the function name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_distance_3dCInspect

Compute Euclidean distance between two 3D points. Use for 3D modeling, physics. Formula: √(Δx²+Δy²+Δz²). Inputs: x1,y1,z1,x2,y2,z2. Returns distance. See list_bundles for related 'math' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
x1Yes
x2Yes
y1Yes
y2Yes
z1Yes
z2Yes

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
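The 3D case is the same Euclidean formula with a Δz term. A sketch using `math.dist` (available since Python 3.8), again an illustration rather than the server's actual code:

```python
import math

def distance_3d(x1, y1, z1, x2, y2, z2):
    """Euclidean distance between two points in 3D space."""
    return math.dist((x1, y1, z1), (x2, y2, z2))
```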
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full responsibility for disclosing behavior. It merely states the purpose without any details about side effects (none expected), return format, precision, or error handling. The agent is left uninformed about what the output looks like or any constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no fluff, making it very concise. However, it lacks structure (e.g., no sections or bullet points) and is too brief to convey necessary context. This is a case of under-specification rather than efficient conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (6 numeric parameters, no output schema), the description should at least mention the return value (e.g., 'Returns the distance as a number') or any assumptions (e.g., Euclidean metric). It does not, leaving significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, and the description adds no semantic meaning to parameters like x1,y1,z1. While parameter names are somewhat self-explanatory, the description does not clarify that they represent coordinates of two 3D points. This forces the agent to rely on naming conventions, which may be ambiguous.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Distance between two 3D points', which clearly identifies the tool's function—computing Euclidean distance in 3D space. It distinguishes from siblings like calculate_distance_2d by explicitly specifying 3D. However, the description omits an action verb like 'calculate' or 'compute', relying on the tool name and context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when or why to use this tool versus alternatives. The description does not mention scope, prerequisites, or exclusions. For example, it does not differentiate from calculate_distance_2d or other distance-related tools, leaving the agent to infer usage solely from the tool name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_distance_securiteBInspect

Calculate safe following distance using the 2-second rule (French highway code). Returns: {safety_distance_2s_m, highway_3s_m, note}. See list_bundles for related 'auto-transport' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
speed_kmhYesVehicle speed in km/h

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
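The 2-second rule reduces to converting km/h to m/s (divide by 3.6) and multiplying by the time gap. A sketch matching the documented return keys; the one-decimal rounding is an assumption:

```python
def safety_distance(speed_kmh):
    """Following distance for 2 s (general) and 3 s (highway) time gaps."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return {
        "safety_distance_2s_m": round(speed_ms * 2, 1),
        "highway_3s_m": round(speed_ms * 3, 1),
    }
```

At 130 km/h the 2-second gap works out to about 72 m.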
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden. It discloses the calculation method (2-second rule) but does not specify the output unit or any side effects. The description is minimal in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no extraneous words. It is highly concise and front-loads the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter calculator, the description is adequate but lacks output unit information. An agent might need to know the result is in meters or seconds. With no output schema, this gap is notable but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with the single parameter speed_kmh described in the schema. The description does not add meaning beyond the schema; it only references the rule. The baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: calculating safe following distance using the 2-second rule per the French highway code. It is specific, uses a verb and resource, and distinguishes itself from sibling tools like calculate_braking_distance by specifying the rule and context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided. There is no mention of when to use this tool versus alternatives such as calculate_braking_distance or other distance calculators. The description does not set context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_distance_to_horizonBInspect

Calculate the distance to the horizon from a given height. Returns: {distance_km, distance_miles, distance_nautical_miles}. See list_bundles for related 'astronomie-nature' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
height_mYesObserver height above ground in metres

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
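The assumptions the review flags as missing (spherical Earth, no refraction) can be stated alongside the geometric formula d ≈ √(2·R·h). A sketch returning kilometres only; the server additionally reports miles and nautical miles:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def horizon_distance_km(height_m):
    """Geometric horizon distance for an observer height in metres."""
    return math.sqrt(2 * EARTH_RADIUS_KM * height_m / 1000)
```

From 100 m up this gives roughly 35.7 km. Standard atmospheric refraction extends the visible horizon by around 8 %; whether the server accounts for it is not documented.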
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It does not mention assumptions (e.g., spherical Earth, standard refraction), limitations, or what the result represents.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that conveys the purpose without wasting words. It is appropriately concise for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and the description does not specify the output units (e.g., kilometers) or whether it's line-of-sight distance. This lacks completeness for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear parameter description. The tool description adds minimal extra meaning ('from a given height') beyond the schema. The baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Calculate') and resource ('distance to the horizon') with a clear input ('from a given height'). It clearly distinguishes from hundreds of sibling 'calculate_*' tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Given the large number of sibling tools, explicit usage context or alternative recommendations are missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
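The tool reviewed above computes the distance to the horizon from a given height. As a minimal sketch of the standard geometric formula, assuming a spherical Earth and ignoring atmospheric refraction (exactly the undisclosed assumptions the Behavior rationale flags), the distance is d = √(2Rh):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; spherical-Earth assumption

def horizon_distance_km(height_m: float) -> float:
    """Geometric distance to the horizon in km, ignoring refraction."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m) / 1000

# From a 100 m observation height the horizon is roughly 35.7 km away.
print(round(horizon_distance_km(100), 1))
```

This is an illustration of the textbook formula, not necessarily the server's exact method; real refraction typically extends the visible horizon by a few percent.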

calculate_dog_age (B)

Convert dog age to human-equivalent years using modern AAHA method. Use for canine health monitoring. Inputs: dog age years, breed size. Returns human-equivalent age. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
size | No | Dog size | medium
dog_years | Yes | Dog age in years |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description mentions 'modern method' but does not explain what it is or how it works. The size parameter is not mentioned, and no behavioral traits beyond the basic conversion are disclosed. No annotations exist to compensate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: 6 words, front-loaded with the core purpose. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided, and the description does not specify the return value. It lacks detail on the modern method and the role of size. However, for a simple calculator with clear schema, it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover both parameters (size and dog_years) with 100% coverage. The tool description adds no extra meaning beyond what the schema already provides, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: converts dog age to human years using a modern method. It distinguishes from sibling tools like calculate_cat_age and calculate_pet_age.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Many sibling tools exist for different animals and calculations, but no context is provided for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
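The description invokes a "modern AAHA method" without stating it, which the Behavior rationale criticizes. One widely cited modern conversion, from a 2020 canine epigenetics study, is human_age = 16·ln(dog_age) + 31; it is used here purely as an illustration and is not necessarily the server's actual formula (note it also ignores the breed-size parameter):

```python
import math

def dog_to_human_years(dog_years: float) -> float:
    """Illustrative logarithmic conversion (16*ln(x) + 31), valid for x > 0."""
    return 16 * math.log(dog_years) + 31

# A 2-year-old dog maps to roughly 42 human-equivalent years.
print(round(dog_to_human_years(2), 1))
```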

calculate_dog_food (C)

Calculate daily dog food quantity based on weight, age and activity level. Returns: {kcal_per_day}. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
age | Yes | |
activity | Yes | |
weight_kg | Yes | |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description provides no behavioral details beyond the calculation itself. It does not disclose units of the result (e.g., grams vs cups), assumptions, or any side effects. The bare-minimum information does not compensate for the lack of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. However, it is too sparse, omitting essential details such as output format and usage context, and so trades completeness for brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (3 parameters, no nested objects) and absence of an output schema, the description should explain the result (e.g., quantity unit, serving recommendations). It fails to provide a complete picture, leaving significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description should elaborate on parameter meanings. It only repeats the parameter names ('weight, age and activity level') without specifying units, valid enum values, or constraints. This adds minimal value beyond the input schema structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates daily dog food quantity based on weight, age, and activity level. The verb 'calculate' combined with the specific resource 'dog food quantity' makes the purpose unambiguous, distinguishing it from cat food or generic pet food tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives like calculate_cat_food or calculate_pet_food_portion. No explicit context for usage or exclusions is provided, leaving the agent to infer from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
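The tool's description promises {kcal_per_day} but states no formula. A common veterinary baseline, shown here as a hedged sketch, is the resting energy requirement RER = 70 × weight^0.75 kcal/day, scaled by an activity multiplier; the multiplier values below are hypothetical examples, not the server's actual coefficients:

```python
# Illustrative daily-energy estimate for dogs. RER = 70 * kg**0.75 is a
# standard veterinary baseline; the activity factors are assumed values.
ACTIVITY_FACTOR = {"low": 1.2, "moderate": 1.6, "high": 2.0}

def kcal_per_day(weight_kg: float, activity: str) -> float:
    rer = 70 * weight_kg ** 0.75  # resting energy requirement
    return rer * ACTIVITY_FACTOR[activity]

# A 10 kg dog with moderate activity needs roughly 630 kcal/day.
print(round(kcal_per_day(10, "moderate")))
```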

calculate_dog_pregnancy (B)

Compute dog due date from mating date (gestation 63 days). Use for breeders. Inputs: mating date. Returns due date window and milestone dates. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
mating_date | Yes | Mating date YYYY-MM-DD |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states the transformation (mating date to due date) but does not disclose assumptions (e.g., gestation period length), limitations, or what the output format is. For a simple calculator, this is minimal but lacks context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, short sentence that is front-loaded and contains no unnecessary words. It efficiently conveys the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a one-parameter tool with no output schema, the description is adequate but could be improved by stating the expected output (e.g., due date) or the assumed gestation period. It is minimally complete but lacks details that would help an agent use it correctly without further context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'mating_date' described as 'Mating date YYYY-MM-DD'. The description adds no extra meaning beyond the schema, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Calculate dog due date from mating date' clearly states the verb (calculate), resource (dog due date), and input (mating date). It distinguishes from siblings like 'calculate_cat_pregnancy' and 'calculate_pregnancy_due_date' by specifying 'dog'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus similar siblings like 'calculate_breeding_due_date'. No exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
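The description does state the core assumption (gestation 63 days), so the underlying date arithmetic is straightforward. A minimal sketch, assuming the 63-day average from the description (actual canine gestation is commonly quoted as roughly 58 to 68 days):

```python
from datetime import date, timedelta

GESTATION_DAYS = 63  # average canine gestation per the tool description

def dog_due_date(mating_date: str) -> str:
    """Due date as mating date + 63 days; a due-date window would bracket this."""
    y, m, d = map(int, mating_date.split("-"))
    return (date(y, m, d) + timedelta(days=GESTATION_DAYS)).isoformat()

print(dog_due_date("2024-03-01"))  # -> 2024-05-03
```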

calculate_dog_walking_calories (C)

Compute calories burned by dog and human during a walk. Use for pet weight management. Inputs: dog weight, walk duration, pace. Returns calories burned by both. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
pace | Yes | |
duration_min | Yes | |
dog_weight_kg | Yes | |
walker_weight_kg | Yes | |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must disclose behavior. It states 'during a walk' but does not explain assumptions, limitations, or whether the calculation is scientifically validated. Minimal behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, directly to the point, no unnecessary words. Very concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 4 required parameters and no output schema, the description is too minimal. It lacks information on expected output format, range of values, or any warnings. In a server with many similar tools, this is insufficient for easy differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% with no parameter descriptions. Although parameter names are self-explanatory, the description adds no meaning beyond the names. Does not elaborate on how parameters relate to the calculation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates calories burned by both walker and dog during a walk. Among many calculate tools, this is distinct because it involves two entities. However, it doesn't specify the type of formula or the intended audience.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool compared to other calorie-related tools like calculate_calories_burned or calculate_dog_food. No alternatives or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
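The Purpose rationale notes that no formula type is specified. One plausible approach for a tool like this is a MET-based estimate, kcal ≈ MET × weight(kg) × duration(h), applied separately to walker and dog; the MET values below are illustrative guesses, not the server's actual coefficients:

```python
# Hypothetical MET-based sketch; the pace-to-MET mapping is an assumption.
PACE_MET = {"slow": 2.5, "moderate": 3.3, "brisk": 4.0}

def walk_calories(weight_kg: float, duration_min: float, pace: str) -> float:
    """kcal burned ~= MET * weight(kg) * duration(hours)."""
    return PACE_MET[pace] * weight_kg * (duration_min / 60)

# Applied to both entities with their respective weights:
human_kcal = walk_calories(70, 30, "moderate")
dog_kcal = walk_calories(20, 30, "moderate")
print(human_kcal, dog_kcal)
```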

calculate_dollar_cost_average (C)

Calculate DCA portfolio value and performance for recurring crypto investments. See list_bundles for related 'crypto' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
periods | Yes | Number of investment periods |
average_price | Yes | Average purchase price per unit over all periods |
current_price | Yes | Current market price per unit |
investment_per_period | Yes | Amount invested per period in fiat currency |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavioral traits. It fails to mention that the tool assumes a known average price, not simulating individual purchases. This key assumption is not explicit, which could lead the agent to misuse the tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the core purpose. However, it could benefit from additional structure, such as listing computed outputs or noting assumptions, for better clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, yet the description does not hint at what outputs are returned (e.g., total units, current value, profit/loss). The tool's complexity is moderate, and the description should at least outline the results to ensure correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no additional parameter meaning beyond what the schema already provides (e.g., units, validation rules). The schema descriptions are adequate for understanding parameter purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates DCA portfolio value and performance for recurring crypto investments. It uses a specific verb and resource, distinguishing it from sibling tools like calculate_crypto_profit_loss. However, it could be more specific about what exactly is computed (e.g., total investment, current value, profit/loss).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of use cases, limitations, or comparison to related tools like a full DCA simulator. The description lacks any context for appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
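The arithmetic implied by the four parameters makes the Behavior rationale's point concrete: the tool takes a known average price rather than simulating individual purchases. A sketch under that assumption (the output field names here are hypothetical, since the server publishes no output schema):

```python
def dca_summary(periods: int, investment_per_period: float,
                average_price: float, current_price: float) -> dict:
    """DCA position from an already-known average purchase price."""
    total_invested = periods * investment_per_period
    units = total_invested / average_price          # units accumulated
    current_value = units * current_price           # mark-to-market value
    return {
        "total_invested": total_invested,
        "units": units,
        "current_value": current_value,
        "profit_loss": current_value - total_invested,
    }

# 12 monthly buys of 100 at an average price of 40,000, now priced 50,000:
print(dca_summary(12, 100, 40_000, 50_000))
```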

calculate_dpe_energy_class (A)

Determine French DPE energy class from primary energy consumption. Returns: {note}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
kwh_m2_year | Yes | Primary energy consumption in kWh/m2/year |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. It states the purpose but does not mention output format, accuracy, assumptions, or that it's a read-only computation. Minimal disclosure beyond the obvious.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 10 words that front-loads the essential purpose. No wasted words, exactly as concise as needed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter deterministic calculation, the description covers the core purpose and input. It lacks explicit output details (e.g., returns letter A-G) but the name and description make it clear enough. Slightly incomplete given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already includes a clear description for the parameter ('Primary energy consumption in kWh/m2/year'), and the description adds no extra meaning. With 100% schema coverage, the baseline score is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Determine'), the resource ('French DPE energy class'), and the input ('primary energy consumption'). It uniquely identifies the tool among many 'calculate_*' siblings by specifying 'French DPE'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for calculating DPE energy class but provides no explicit guidance on when to use it versus other tools, nor any prerequisites or exclusions. It is adequate but lacks proactive direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
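The classification itself is a simple threshold lookup. The bands below approximate the post-2021 French DPE primary-energy scale and are shown for illustration only; the official DPE is dual-criteria (primary energy plus greenhouse-gas emissions), so verify against the applicable decree before relying on these numbers:

```python
# Approximate primary-energy bands in kWh/m2/year (illustrative values).
DPE_BANDS = [(70, "A"), (110, "B"), (180, "C"), (250, "D"), (330, "E"), (420, "F")]

def dpe_class(kwh_m2_year: float) -> str:
    """Return the first band whose upper limit covers the consumption."""
    for limit, note in DPE_BANDS:
        if kwh_m2_year <= limit:
            return note
    return "G"  # above the last threshold

print(dpe_class(95))  # -> B
```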

calculate_drain_slope (B)

Compute required drain slope (% and cm/m) to ensure proper water flow per plumbing code. Use for plumbing or roof drainage. Inputs: pipe length, pipe diameter, application. Returns slope % and drop in cm. See list_bundles for related 'plomberie' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
fixture_type | Yes | Type of sanitary fixture being drained |
pipe_diameter_mm | No | Drain pipe diameter in millimeters (default 100mm) |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states what the tool does, not any behavioral traits (e.g., that it is read-only, has no side effects, or assumptions about input ranges). The description adds no behavioral context beyond the purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence that directly conveys the tool's purpose with no extraneous words. Efficient and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description should indicate what the tool returns (e.g., slope in degrees or percentage) and possibly reference the specific DTU norm. Without this, the agent cannot fully understand the return value. The description is incomplete for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so each parameter already has a textual description. The tool description adds no extra meaning or relationships between fixture_type and pipe_diameter_mm. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool calculates minimum drain pipe slope according to French DTU norms. The verb 'calculate' and resource 'drain pipe slope' are specific, and the French norms distinction differentiates it from generic 'calculate_slope' and other pipe calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. For instance, it doesn't mention when to prefer this over 'calculate_pipe_diameter' or 'calculate_slope', nor does it list prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
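The description's two units are directly related: a slope of 1% is 1 cm of drop per metre of horizontal run, so drop = slope% × length. A minimal sketch of that conversion, assuming the required slope percentage has already been determined (which slope a given fixture and diameter require is exactly what the tool itself, per French DTU rules, would decide):

```python
def drain_drop_cm(slope_percent: float, length_m: float) -> float:
    """Vertical drop in cm: a 1% slope equals 1 cm per metre of run."""
    return slope_percent * length_m

# A 2% slope over a 5 m run needs a 10 cm drop.
print(drain_drop_cm(2.0, 5.0))
```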

calculate_dress_alterations (C)

Estimate dress alteration cost and time by alteration type. Use for tailoring or wedding-dress budgeting. Inputs: alterations needed (hem, sides, sleeves), garment type. Returns total cost estimate and time hours. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
target_size | Yes | Target FR dress size |
measurement_bust | Yes | Actual bust measurement cm |
measurement_hips | Yes | Actual hip measurement cm |
measurement_waist | Yes | Actual waist measurement cm |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavioral traits, but it only states the core action. Key details are missing: what computation occurs (e.g., difference between measurements and target?), what format is returned, and whether negative results are meaningful. The description does not contradict annotations (none exist), but it is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is brief but excessively minimal for a tool with four required parameters and no output schema. It sacrifices informativeness for brevity, leaving the agent underinformed about how to use the tool correctly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description fails to provide sufficient context for correct usage. It does not explain what 'alteration adjustments' means in practice, how to interpret results, or handle edge cases, making it incomplete for reliable agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% description coverage for all four parameters, so the description adds no new semantic value. Baseline scores apply: the schema already clarifies parameter meaning (e.g., actual bust measurement in cm), and the description does not detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('calculate') and resource ('alteration adjustments needed for a dress'), making the tool's purpose understandable. However, it lacks precision about what the output represents (e.g., dimensional changes), which would help distinguish it from other clothing sizing tools like calculate_bra_size or calculate_clothing_size_convert.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, limitations, or typical use cases, leaving the agent to infer context from the tool name and schema alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_due_date (C)

Calculate estimated due date using Naegele's rule and return trimester milestone dates

Parameters (JSON Schema)
Name | Required | Description | Default
last_period_date | Yes | YYYY-MM-DD — First day of last menstrual period |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must disclose behavioral traits. It states the algorithm and output, but does not mention side-effect status, error handling, or assumptions about input validity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 12 words, no fluff, front-loaded with key purpose. Highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description should clarify return value format. It ambiguously says 'return trimester milestone dates' without confirming the due date itself is returned. Lacks explanation of output structure or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the only parameter. Tool description adds no extra meaning beyond schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies verb 'calculate', resource 'estimated due date', method 'Naegele's rule', and mentions additional output (trimester milestone dates). It is clear and specific, but does not explicitly differentiate from sibling tool 'calculate_pregnancy_due_date'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., other pregnancy calculators). No prerequisites or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_earthquake_energy (B)

Calculate energy released by an earthquake from its magnitude. Returns: {energy_joules, tnt_equivalent_kg}. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
magnitude | Yes | Richter/moment magnitude |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

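The description names the outputs but not the formula. A common choice is the Gutenberg-Richter radiated-energy relation; the sketch below assumes that relation and a TNT equivalence of 4.184 MJ/kg, neither of which is confirmed by the server's documentation.

```python
def earthquake_energy(magnitude: float) -> dict:
    # Assumed Gutenberg-Richter radiated-energy relation:
    # log10(E in joules) = 1.5 * M + 4.8
    energy_joules = 10 ** (1.5 * magnitude + 4.8)
    return {
        "energy_joules": energy_joules,
        # 1 kg of TNT releases about 4.184 MJ
        "tnt_equivalent_kg": energy_joules / 4.184e6,
    }
```

Under this relation each whole magnitude step multiplies the released energy by 10^1.5, roughly 31.6 times.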
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool calculates energy but does not disclose the formula, units of output (joules, ergs), or any assumptions (e.g., Gutenberg-Richter). This lack of detail leaves ambiguity about the calculation's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Perfectly concise for the simple function it describes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a one-parameter calculation tool, the description covers the basic purpose but lacks details on output format or units. Given no output schema, the description should clarify what the agent can expect, which it does not. However, the tool is simple enough that this may suffice.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (parameter 'magnitude' has description 'Richter/moment magnitude'). The description adds 'from its magnitude', which is redundant. Baseline is 3 since schema fully documents the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: calculate energy from earthquake magnitude. It uses a specific verb and resource. However, it does not explicitly differentiate from sibling tools beyond the name, which is adequate given the unique context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not mention any prerequisites, constraints (e.g., valid magnitude range beyond schema min/max), or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ects_credits (C)

Estimate ECTS credit workload (1 ECTS ≈ 25-30 study hours). Use for university course planning across Europe. Inputs: course hours, credits target. Returns expected workload and balance. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
weeks | Yes | Number of weeks |
hours_per_week | Yes | Study hours per week |
hours_per_credit | No | Hours per ECTS credit (standard: 25-30) |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

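The standard ECTS arithmetic is total study hours divided by hours-per-credit. This sketch assumes the midpoint default of 27.5 h/credit; the server's actual default within the 25-30 range is not documented.

```python
def ects_credits(weeks: float, hours_per_week: float,
                 hours_per_credit: float = 27.5) -> dict:
    # Total workload, then convert at the chosen hours-per-credit rate
    total_hours = weeks * hours_per_week
    return {
        "total_hours": total_hours,
        "credits": round(total_hours / hours_per_credit, 2),
    }
```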
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must fully disclose behavior. It only states 'Estimate ECTS credit workload' without explaining the output format, edge cases, or that it uses the standard formula hours_per_week * weeks / hours_per_credit. This is insufficient for an agent to anticipate tool behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that conveys the core purpose without extraneous words. It is front-loaded and easy to parse, though it could be slightly expanded without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple calculation and complete parameter schema, the description is minimally adequate. However, it lacks context about how the result is computed (e.g., credits = hours / 27.5), potential ranges, or validation warnings, leaving the agent with incomplete guidance for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all three parameters, so the schema already documents their meaning. The description adds no additional semantic context beyond the schema, meeting baseline expectation but not enhancing understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description 'Estimate ECTS credit workload' clearly states the verb (estimate) and resource (ECTS credit workload). However, with over 100 sibling calculate_* tools, it does not differentiate from similar educational or credit-related calculators, lacking specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not mention typical use cases, prerequisites, or suggest other tools for similar calculations, leaving the agent without decision context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_electrical_power (C)

Compute electrical power for single or three-phase circuits. Use for electrical engineering. Inputs: voltage, current, phase, power factor. Returns power kW and apparent power kVA. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
phase | No | Phase | mono
cos_phi | No | Power factor |
current | Yes | Amps |
voltage | Yes | Volts |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

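The underlying formulas are standard: P = U·I·cosφ for single-phase and P = √3·U·I·cosφ for three-phase line quantities. A minimal sketch, in which the "mono"/"tri" phase strings and the cos_phi default of 1.0 are assumptions:

```python
import math

def electrical_power(voltage: float, current: float,
                     phase: str = "mono", cos_phi: float = 1.0) -> dict:
    # Apparent power S = U*I (single-phase) or sqrt(3)*U*I (three-phase)
    k = math.sqrt(3) if phase == "tri" else 1.0
    apparent_va = k * voltage * current
    # Active power P = S * cos(phi)
    active_w = apparent_va * cos_phi
    return {"power_kw": active_w / 1000, "apparent_kva": apparent_va / 1000}
```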
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description does not disclose side effects, permissions, or input constraints beyond schema. For a calculator tool, it's likely safe, but no explicit confirmation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise (5 words) but lacks substance. While not verbose, it omits important details like output format or formula used, making it too sparse to be fully helpful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given four parameters and no output schema, the description should explain what the tool returns (e.g., power in watts) and any assumptions. It does not, leaving the agent to infer from the name and schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all parameters with descriptions (100% coverage). Description adds no additional meaning; the phrase 'mono/tri-phase' hints at phase usage but is already in schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Electrical power mono/tri-phase' clearly states the tool computes electrical power for single-phase or three-phase systems. It uses specific domain terms and distinguishes from generic power calculators, though many sibling tools exist.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternative tools (e.g., calculate_electricity_cost). Lacks prerequisites, limitations, or scenarios where this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_electricity_cost (C)

Compute electricity cost for an appliance. Use for energy budgeting. Inputs: power W, daily usage hours, kWh price. Returns daily/monthly/yearly cost. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
days | No | Days |
power_w | Yes | Watts |
hours_day | Yes | Hours/day |
price_kwh | No | EUR/kWh |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

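The arithmetic here is kWh = W/1000 × hours/day × days, times the tariff. The defaults below (30 days, 0.2516 EUR/kWh) are the ones quoted in the Behavior assessment for this tool, but treat them as assumptions rather than confirmed server behavior.

```python
def electricity_cost(power_w: float, hours_day: float,
                     price_kwh: float = 0.2516, days: int = 30) -> dict:
    # Energy consumed over the period, in kilowatt-hours
    kwh = power_w / 1000 * hours_day * days
    return {"kwh": kwh, "cost_eur": round(kwh * price_kwh, 2)}
```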
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but only states the basic function. It does not disclose that default values (30 days, 0.2516 EUR/kWh) are applied or what the output format is.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. It is appropriately concise and front-loaded, though it could benefit from a bit more detail without losing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the minimal description is adequate but lacks completeness. It doesn't explain the default values or the nature of the output, which is important for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the input schema already thoroughly describes parameters. The description adds no additional meaning beyond what is in the schema, earning a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates electricity cost for an appliance, which is a specific verb+resource. It distinguishes its purpose from other cost-related tools, though it doesn't explicitly differentiate siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like calculate_electricity_cost_appliance or other cost calculators. The description lacks any context for usage decisions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_electricity_cost_appliance (B)

Compute annual electricity cost of a household appliance. Use for energy audit and replacement decisions. Inputs: power W, daily hours, kWh price. Returns annual cost and CO₂ kg. See list_bundles for related 'energie' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
power_w | Yes | Power in watts |
hours_day | Yes | Hours used per day |
price_kwh | No | EUR per kWh |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

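Same kWh arithmetic over 365 days, plus a CO₂ estimate. The emission factor below is a placeholder assumption; the server does not document which grid factor it uses, and real factors vary by country and year.

```python
CO2_KG_PER_KWH = 0.06  # assumed grid-emission factor; the server's value is undocumented

def appliance_annual_cost(power_w: float, hours_day: float,
                          price_kwh: float = 0.2516) -> dict:
    kwh_year = power_w / 1000 * hours_day * 365  # annual consumption in kWh
    return {
        "annual_cost_eur": round(kwh_year * price_kwh, 2),
        "co2_kg": round(kwh_year * CO2_KG_PER_KWH, 1),
    }
```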
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It only states the high-level output but does not mention assumptions (e.g., constant usage, 365 days), limitations (e.g., no seasonal variation), or what happens with zero inputs. The formula or calculation method is not described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. It is well front-loaded and efficiently conveys the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple parameters and no output schema, the description covers the main purpose and implicitly connects inputs to output. However, it lacks usage guidance and behavioral transparency, which slightly reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good parameter descriptions. The description adds minimal value beyond the schema by implying the result is annual cost, which ties the parameters together. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Annual electricity cost of an appliance' clearly states the tool's purpose (calculate annual cost for an appliance) and implicitly distinguishes from the sibling 'calculate_electricity_cost' by the word 'appliance'. However, it does not explicitly clarify the difference, so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not mention when to use this tool versus alternatives like 'calculate_electricity_cost', nor does it specify prerequisites or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ellipse (C)

Compute ellipse area A=π·a·b and approximate perimeter (Ramanujan). Use for elliptical fields, tracks, or design. Inputs: semi-major a, semi-minor b. Returns area and perimeter. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
a | Yes | Semi-major axis |
b | Yes | Semi-minor axis |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

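Both formulas are named in the description, so this one can be reproduced with little guesswork: A = π·a·b for the area and Ramanujan's first approximation for the perimeter.

```python
import math

def ellipse(a: float, b: float) -> dict:
    area = math.pi * a * b
    # Ramanujan's first approximation:
    # P ~= pi * (3*(a+b) - sqrt((3a+b)*(a+3b)))
    perimeter = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
    return {"area": area, "perimeter": perimeter}
```

For a circle (a == b) the approximation reduces exactly to the true perimeter 2πr.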
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility. It does not disclose behavioral traits like return format, units, or whether both area and circumference are returned separately. Minimal disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a short phrase, which is concise but lacks sentence structure. It is not a full sentence and could be more explicit. Acceptable for a simple tool but not exemplary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with complete schema coverage and no output schema, the description is adequate but incomplete. It does not specify whether both area and circumference are returned, or any other details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for both parameters, both described as 'Semi-major axis' and 'Semi-minor axis.' The description adds no new meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies 'Ellipse area and circumference,' clearly indicating the tool's output and resource. It distinguishes from many sibling tools that calculate other geometric shapes, but it could explicitly mention 'calculate' to match the verb.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, such as other geometric calculators (e.g., calculate_cone, calculate_cylinder). No prerequisites or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_emergency_fund (C)

Compute recommended emergency fund target (3-6 months of expenses). Use for personal financial planning. Inputs: monthly expenses, dependents count, income stability. Returns recommended fund and savings timeline. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
dependents | Yes | Number of dependents |
job_stability | Yes | Job stability level |
monthly_expenses | Yes | Monthly expenses EUR |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

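The 3-6-month rule is stated, but how dependents and job stability adjust it is not. The mapping below is purely illustrative, including the "stable" string and the one-month-per-dependent increment; the server's actual rule is undocumented.

```python
def emergency_fund(monthly_expenses: float, dependents: int,
                   job_stability: str) -> dict:
    # Illustrative rule only: start from 3 months (stable job) or 6 (otherwise),
    # add one month per dependent, cap at 12. Not the server's documented rule.
    months = 3 if job_stability == "stable" else 6
    months = min(months + dependents, 12)
    return {"months": months, "target_eur": monthly_expenses * months}
```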
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits like calculation assumptions or output format, but it only states the purpose, leaving expectations unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words, but it could benefit from slight expansion to improve clarity without losing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain the return value or criteria for the recommendation. It fails to provide this context, leaving the tool incomplete for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes all parameters with 100% coverage. The description adds no additional meaning beyond what the schema provides, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate' and the resource 'emergency fund target', making the purpose obvious. However, it does not explicitly distinguish from sibling tools, though no sibling appears similar.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, or any prerequisites or context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_employer_cost_fr (C)

Compute total employer cost in France (gross + social charges). Use for hiring budget or freelance vs salary comparison. Inputs: gross salary, status (cadre/non-cadre). Returns employer cost and total charges. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
gross_monthly | Yes | Monthly gross salary EUR |

Output Schema: same shared schema as every tool (optional fields: result, source, formula, reference_url).

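Note the listing's own inconsistency: the description mentions a cadre/non-cadre status input, but the schema exposes only gross_monthly. The sketch below uses a single assumed flat charge rate; real French employer contributions vary substantially with salary level and status, so this is only a shape-of-the-calculation illustration.

```python
EMPLOYER_CHARGE_RATE = 0.42  # assumed flat rate; real French charges vary by salary and status

def employer_cost_fr(gross_monthly: float) -> dict:
    charges = gross_monthly * EMPLOYER_CHARGE_RATE
    return {
        "total_charges_eur": round(charges, 2),
        "employer_cost_eur": round(gross_monthly + charges, 2),
    }
```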
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description does not disclose behavioral traits beyond the core function. It does not mention whether the tool is a safe read operation, what dependencies exist, or what the output format will be. For a calculator, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (4 words) but lacks structure. While brevity is good, it omits necessary context, such as a full sentence or bullet points explaining the tool's purpose and behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 param, no output schema) and many siblings, the description is incomplete. It should clarify what 'total employer cost' includes (e.g., social charges, taxes) and what the output represents. Without this, an AI agent may misuse it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear param description ('Monthly gross salary EUR'). The tool description adds no additional meaning to the parameter, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Total employer cost France' states the tool's purpose: calculating employer cost in France. However, it is vague and does not specify the exact scope (e.g., including social charges, health insurance, etc.) to distinguish it from related tools like calculate_french_salary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Given many sibling tools for salary and cost calculations in different countries, explicit differentiation (e.g., 'For employee net salary, use calculate_french_salary') is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_energy_physicsBInspect

Calculate kinetic (½mv²), potential (mgh), mass-energy (E=mc²), or work (F·d). Returns: {energy_joules, energy_kj, energy_kwh}. See list_bundles for related 'science' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
typeYesEnergy type
force_nNoForce N (work)
mass_kgNoMass in kg
height_mNoHeight m (potential)
distance_mNoDistance m (work)
velocity_msNoVelocity m/s (kinetic)

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
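For reference, the four formulas the description names can be sketched directly. This is an illustrative reimplementation under stated assumptions (standard values for g and c), not the server's actual code:

```python
def energy_joules(energy_type, mass_kg=None, velocity_ms=None, height_m=None,
                  force_n=None, distance_m=None):
    """Illustrative sketch of the four documented energy formulas."""
    G = 9.81               # standard gravity in m/s2 (assumed value)
    C = 299_792_458.0      # speed of light in m/s
    if energy_type == "kinetic":      # (1/2) m v^2
        return 0.5 * mass_kg * velocity_ms ** 2
    if energy_type == "potential":    # m g h
        return mass_kg * G * height_m
    if energy_type == "mass_energy":  # E = m c^2
        return mass_kg * C ** 2
    if energy_type == "work":         # F * d
        return force_n * distance_m
    raise ValueError(f"unknown energy type: {energy_type}")

def as_units(joules):
    # Mirrors the documented return shape {energy_joules, energy_kj, energy_kwh}
    return {"energy_joules": joules,
            "energy_kj": joules / 1_000,
            "energy_kwh": joules / 3_600_000}
```

A sketch like this also makes the parameter grouping concrete: kinetic needs mass_kg and velocity_ms, work needs force_n and distance_m, and so on.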
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description must disclose behavior. It only lists what is calculated without mentioning side effects, output format, or whether the tool is read-only. Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise (one sentence listing formulas) and front-loaded. It could be slightly more structured but is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a multi-type calculation tool, the description lacks detail on how the type parameter is used, on error handling, and on the return value structure. However, given that the input schema covers the parameters, it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline 3. The description adds value by linking formulas to parameter usage (e.g., kinetic uses mass and velocity), clarifying which parameters apply to each energy type.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates four specific energy types with their formulas. It distinguishes itself from siblings like 'calculate_kinetic_energy' by covering multiple types in one tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus the separate single-type calculators (e.g., 'calculate_kinetic_energy'). The description does not indicate context or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_equation (B)

Solve 1st degree (ax+b=0) or 2nd degree (ax²+bx+c=0) equations. Returns: {error}. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
a | Yes | Coefficient a
b | Yes | Coefficient b
c | No | Coefficient c (for degree 2)
degree | Yes | Equation degree

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
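Since the server's edge-case handling (degenerate a=0, complex roots) is undocumented, here is a minimal sketch of what such a solver typically does; it is an assumption, not the tool's actual behavior:

```python
import math

def solve_equation(a, b, c=0.0, degree=1):
    """Sketch of a 1st/2nd-degree solver over the reals."""
    if degree == 1:                        # a x + b = 0
        if a == 0:
            return {"error": "a must be non-zero"}
        return {"roots": [-b / a]}
    disc = b * b - 4 * a * c               # a x^2 + b x + c = 0
    if disc < 0:
        return {"roots": [], "note": "no real roots"}
    sq = math.sqrt(disc)
    # A set collapses the double root when the discriminant is zero
    return {"roots": sorted({(-b - sq) / (2 * a), (-b + sq) / (2 * a)})}
```

An agent-facing description could disclose exactly these cases (no real roots, double root, degenerate degree-1 input), which is what the Behavior score above penalizes.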
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility. It only states the types of equations solved, with no disclosure about handling edge cases (e.g., degenerate equations, division by zero, complex roots) or what output the tool returns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single 14-word sentence that efficiently conveys the tool's purpose with no wasted words. It is front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple equation solver, the description covers the core functionality. However, it lacks behavioral transparency for edge cases and output format, which an agent would need for robust invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters in the schema have descriptions, providing 100% coverage. The description adds value by showing the mathematical context (ax+b=0, ax²+bx+c=0), clarifying the role of each coefficient beyond the schema's brief label.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool solves 1st and 2nd degree equations and gives their standard forms. It distinguishes itself from many sibling tools that are specialized for other calculations, though it does not differentiate itself from the similar 'calculate_quadratic_equation' sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, no mention of when not to use it (e.g., for higher-degree equations), and no elaboration on expected input scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ev_charging_cost (C)

Compute electric vehicle charging cost and time to full. Use for EV trip planning. Inputs: battery kWh, current %, target %, charger kW, kWh price. Returns cost and hours. See list_bundles for related 'energie' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
price_kwh | No | Price per kWh
target_pct | No | Target charge %
battery_kwh | Yes | Battery capacity kWh
current_pct | Yes | Current charge %

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
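The arithmetic the description implies is simple to sketch. Note the listed schema does not expose a charger-power parameter even though the description mentions charger kW; the charger_kw parameter and the defaults below are therefore assumptions for illustration:

```python
def ev_charging_cost(battery_kwh, current_pct, target_pct=100.0,
                     price_kwh=0.25, charger_kw=7.4):
    """Sketch of EV charging cost and time; not the server's actual code."""
    energy_kwh = battery_kwh * (target_pct - current_pct) / 100.0
    return {"energy_kwh": energy_kwh,
            "cost": energy_kwh * price_kwh,    # kWh added times price per kWh
            "hours": energy_kwh / charger_kw}  # ignores charging-curve taper
```

A naive hours estimate like this ignores the taper above roughly 80% charge, which is exactly the kind of assumption the Behavior score says the description should disclose.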
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the full burden falls on the description, which fails to disclose any behavioral traits. There is no mention of output format, side effects, assumptions, or whether the tool returns cost and time values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely brief but not effectively concise. It lacks crucial details, making it inadequate rather than efficiently informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite 100% schema coverage, the description omits what the tool returns (cost and time) and any assumptions. For a tool with 4 parameters and no output schema, more context is needed to fully understand its behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing basic parameter meanings. The description adds no additional semantics beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Electric vehicle charging cost and time' vaguely indicates the tool's domain but lacks a specific verb or action (e.g., 'calculates', 'estimates'). It helps distinguish from other calculator tools but could be clearer.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidelines are provided. The description does not specify when to use this tool over sibling calculators, nor does it mention prerequisites or contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_excavation (B)

Compute excavation volume (m³) and truck loads needed for a foundation, pool, or trench. Use for construction. Inputs: length, width, depth (m), bulking factor. Returns m³ to remove and 8m³-truck count. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
depth_m | Yes | Depth in meters
width_m | Yes | Width in meters
length_m | Yes | Length in meters
soil_type | No | Soil type (swell: normal=1.25, rocky=1.50, clay=1.30) | normal

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
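The documented computation (in-ground volume, swell factor from the schema, 8 m³ truck loads) can be sketched as follows; whether the server reports the in-ground or the swelled volume is an assumption here:

```python
import math

# Swell (bulking) factors copied from the soil_type enum in the schema
SWELL = {"normal": 1.25, "rocky": 1.50, "clay": 1.30}

def excavation(length_m, width_m, depth_m, soil_type="normal"):
    """Sketch: swelled volume to haul away and 8 m3 truck loads, rounded up."""
    swelled_m3 = length_m * width_m * depth_m * SWELL[soil_type]
    return {"volume_m3": swelled_m3,
            "truck_loads": math.ceil(swelled_m3 / 8.0)}
```

Rounding truck loads up is the natural choice, since a partial load still requires a full truck trip.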
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It mentions 'swelled volume' but does not explain the calculation steps, return format, or how swell factors are applied. The enum descriptions in the schema help partially, but the description lacks explicit behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, short sentence that is front-loaded and contains no extraneous information. Every word is purposeful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and 4 parameters (including an enum with special values), the description is incomplete. It does not state what the tool returns (e.g., both excavation and swelled volumes, units, or any caveats). More context is needed for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters having descriptions, including swell factors in the soil_type enum. The description adds no additional parameter info beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates excavation and swelled volume for earthwork, using a specific verb and resource. It distinguishes itself from sibling volume calculators by mentioning 'swelled volume', though it could be more explicit about the exact volume calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description is only one sentence without any context on use cases, prerequisites, or exclusions. The agent must infer from the name and schema alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_exchange_margin (C)

Detect the hidden margin charged by a money exchanger over the mid-market rate. Use to compare currency exchange offers. Inputs: offered rate, mid-market rate. Returns margin %. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
bank_rate | Yes | Bank/bureau rate
market_rate | Yes | Mid-market rate

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
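The margin computation the description implies is a one-liner; the exact formula (which rate serves as the denominator) is an assumption, which is part of why the Completeness score below flags the missing output definition:

```python
def exchange_margin_pct(bank_rate, market_rate):
    """Sketch: spread between the offered rate and the mid-market rate,
    expressed as a percentage of the mid-market rate (assumed convention)."""
    return abs(market_rate - bank_rate) / market_rate * 100.0
```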
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description does not disclose any behavioral traits such as what the tool returns, whether it is read-only, or any side effects. For a calculation tool, the output format is critical but missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short, which is concise but at the cost of completeness. It lacks structure and front-loads critical information poorly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, annotations, and the vagueness of the description, the tool is incomplete. A user or agent cannot determine what the result represents (e.g., absolute margin, percentage). More context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides descriptions for both parameters ('Bank/bureau rate' and 'Mid-market rate'), achieving 100% coverage. The tool description adds no further meaning, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Hidden exchange rate margin' is vague. It suggests the tool calculates a margin, but does not specify the verb (e.g., 'calculate') or the exact resource (difference/spread). It is slightly better than a tautology but lacks clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided. The description does not inform when to use this tool over alternatives like calculate_currency_cross_rate or calculate_exchange_rate_margin. Sibling tools exist but no differentiation is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_exchange_rate_margin (C)

Calculate the margin charged on a currency exchange. Returns: {cost_per_1000_eur, rating}. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
bank_rate | Yes | Rate offered by bank/exchange
mid_market_rate | Yes | Mid-market (real) exchange rate

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
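The documented return shape {cost_per_1000_eur, rating} suggests a computation like the following sketch. Both the formula and the rating thresholds are assumptions for illustration, not documented server behavior:

```python
def exchange_rate_cost_per_1000(bank_rate, mid_market_rate):
    """Sketch: what the margin costs on 1,000 EUR exchanged."""
    margin = abs(mid_market_rate - bank_rate) / mid_market_rate
    cost = 1000.0 * margin
    # Rating cut-offs below are hypothetical, chosen only to illustrate
    rating = "good" if margin < 0.01 else "fair" if margin < 0.03 else "poor"
    return {"cost_per_1000_eur": cost, "rating": rating}
```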
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It fails to mention that the tool is a pure calculation with no side effects, what the output represents (e.g., margin amount or percentage), or any limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, focused sentence that directly states the tool's purpose with no unnecessary words. It is highly concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema or behavioral details, the description leaves out important context such as the return format of the margin (e.g., absolute value or percentage). This incompleteness hinders an agent's ability to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with each parameter having a clear description. The tool description adds no additional meaning beyond the schema, meeting the baseline for well-documented parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates 'the margin charged on a currency exchange', providing a specific verb and resource. However, it does not distinguish the tool from the similar sibling 'calculate_exchange_margin', which could cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'calculate_exchange_margin' or prerequisites for the input rates. It is a single sentence without any usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_expected_value_bet (B)

Calculate expected value and profitability of a bet or investment decision. Returns: {lose_probability}. See list_bundles for related 'jeux-probabilites' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
bet_cost | No | Upfront cost to enter the bet (default 0)
win_amount | Yes | Net amount won if outcome is positive
loss_amount | Yes | Net amount lost if outcome is negative
win_probability | Yes | Probability of winning (0 to 1)

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
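The parameters map onto the textbook expected-value formula; how the real tool combines bet_cost with the win/loss amounts is an assumption in this sketch:

```python
def expected_value_bet(win_amount, loss_amount, win_probability, bet_cost=0.0):
    """Sketch of EV = p*win - (1-p)*loss - cost; not the server's actual code."""
    lose_probability = 1.0 - win_probability
    ev = win_probability * win_amount - lose_probability * loss_amount - bet_cost
    return {"expected_value": ev,
            "lose_probability": lose_probability,  # the one documented output field
            "profitable": ev > 0}
```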
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description does not disclose behavioral traits such as whether it is a read-only calculation, side effects, or any assumptions. The agent is left uninformed about the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and front-loaded with the tool's purpose. It contains no unnecessary words, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, and the description does not explain what the tool returns (e.g., a number, an object). For a calculation tool, this lack of output specification leaves important context missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and each parameter already has a description. The tool description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates expected value and profitability of a bet or investment decision, which is specific and distinguishes it from other calculate tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives or when not to use it. Among many sibling calculate tools, there is no mention of context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_exposure_triangle (B)

Calculate the missing exposure value (aperture, shutter speed or ISO) given the other two. Returns: {ev_value, lv_value, shutter_speed_s}. See list_bundles for related 'photographie' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
iso | Yes | ISO sensitivity value
aperture | Yes | Aperture f-number
shutter_speed | Yes | Shutter speed in seconds (e.g. 0.004 for 1/250s)

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
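The documented {ev_value, ...} output suggests the standard exposure-value relation EV = log2(N²/t), adjusted for ISO. Whether the server uses exactly this ISO-100 normalisation is an assumption:

```python
import math

def exposure_value(aperture, shutter_speed_s, iso):
    """Sketch of the standard EV formula, normalised to ISO 100 (assumed)."""
    return math.log2(aperture ** 2 / shutter_speed_s) - math.log2(iso / 100)

def shutter_for_ev(aperture, iso, target_ev):
    # Invert the formula above to recover a "missing" shutter speed,
    # illustrating how two known values plus a target EV fix the third
    return aperture ** 2 * 100 / (iso * 2 ** target_ev)
```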
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full responsibility for behavioral transparency. It only states the calculation purpose but does not disclose what the output is, whether the tool modifies data, or any side effects. The description lacks details on return format or behavior when inputs are provided (e.g., which parameter is assumed missing).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with zero waste. It uses clear, domain-specific terminology and front-loads the purpose immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is minimal and does not explain how to specify which parameter is missing or what the tool returns. Given the schema requires all three parameters, the description's 'missing' concept may confuse agents. No output schema exists, so the description should ideally clarify the return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with individual parameter descriptions. The description adds semantic value by explaining the relationship between parameters—that two are given and the third is calculated. This compensates for the schema's lack of contextual grouping.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates the missing exposure value (aperture, shutter speed, or ISO) given the other two. It uses specific photography terms and distinguishes this tool from other 'calculate' tools by specifying the exposure triangle context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention any prerequisites, limitations, or scenarios where this tool is appropriate. Among many calculate_* siblings, no comparative context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_fabric_needed (C)

Compute fabric meters needed for a garment by pattern. Use for sewing. Inputs: garment type, size, fabric width. Returns meters of fabric. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
size | Yes | Garment size
garment_type | Yes | Garment type
fabric_width_cm | Yes | Fabric roll width cm

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry full weight for behavioral traits. It only states the basic purpose, without disclosing assumptions, return format, or any operational details. For a calculation tool, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no wasted words. While efficient, it could be slightly more structured by elaborating on the calculation scope or output, but it meets the standard for brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should explain the return value. It does not state that it returns a number in meters. Additionally, it lacks context about assumptions or typical use cases, making it incomplete for a complex domain with many sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents parameter meanings. The description adds no additional semantics beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates fabric meters for a garment, using a specific verb and resource. However, it does not differentiate itself from similar sibling tools like calculate_fabric_yardage or calculate_curtain_fabric, limiting its clarity in context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool over alternatives. There are many similar fabric calculators among the siblings, such as calculate_fabric_yardage and calculate_curtain_fabric, but the description offers no help in distinguishing them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_fabric_yardage (C)

Calculate fabric needed for a garment in meters (includes 10% for pattern matching). Returns: {meters_needed, note}. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
size | Yes | |
garment | Yes | |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description only hints at behavioral details (unit in meters, pattern matching allowance). It does not disclose other important aspects such as assumptions (e.g., fabric width, seam allowance) or whether the operation is deterministic.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no extraneous information. It is perfectly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two enum parameters and no output schema, the description gives basic context (unit and pattern allowance). However, it lacks detail on assumptions (e.g., standard fabric width) and does not specify the return format, leaving some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no meaning beyond the input schema's enum values. With 0% schema description coverage, the description should explain how garment and size affect the calculation, but it does not.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates fabric needed for a garment in meters, with a 10% pattern matching allowance. However, it does not distinguish itself from the sibling tool 'calculate_fabric_needed', which likely has a similar purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'calculate_fabric_needed' or other fabric-related calculators. The description lacks usage context or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_factorial_permutation (B)

Calculate factorial, permutations P(n,r), and combinations C(n,r). Returns: {factorial}. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
n | Yes | n value |
r | No | r value for P(n,r) and C(n,r) |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description does not disclose behavioral traits such as constraints (e.g., r must be ≤ n), error handling, or return format. For a math tool, this is insufficient.
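The undisclosed constraints are easy to make concrete. A minimal sketch of what such a tool presumably computes, with illustrative return-field names and the r ≤ n check the description omits (none of this is confirmed by the server):

```python
from math import factorial

def factorial_permutation(n, r=None):
    """Sketch of a factorial/permutation/combination calculator.

    Field names and the omitted-r behavior are assumptions, not the
    server's documented contract.
    """
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    result = {"factorial": factorial(n)}
    if r is not None:
        if not 0 <= r <= n:
            raise ValueError("r must satisfy 0 <= r <= n")
        result["permutations"] = factorial(n) // factorial(n - r)  # P(n, r)
        result["combinations"] = factorial(n) // (
            factorial(r) * factorial(n - r)                        # C(n, r)
        )
    return result
```

A description that stated this much (valid range of r, what happens when r is omitted, return fields) would likely score higher on this dimension.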

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence efficiently lists the three operations. It is front-loaded and concise, though it could benefit from brief usage hints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema and no description of the return format or the behavior when r is omitted. The description is incomplete given the tool's complexity and lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for parameters n and r. The description adds no additional meaning beyond the schema, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates factorial, permutations (P(n,r)), and combinations (C(n,r)) with specific verbs and resources, distinguishing it from other calculator tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like calculate_dice_probability or calculate_binomial_probability. Usage is implied for combinatorial calculations but lacks exclusions or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_fertilizer_npk (B)

Calculate NPK fertilizer quantities needed based on crop type and soil type. Returns: {total_kg}. See list_bundles for related 'jardinage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
crop_type | Yes | Type of crop to fertilize |
soil_type | Yes | Type of soil |
surface_m2 | Yes | Surface area in square meters |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description lacks details about output format, units, or behavioral traits (e.g., returns amounts in grams or kg per m²). For a calculation tool, more transparency is needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single sentence with no fluff, but it could be more informative by including surface area as a factor. Still concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema (3 params) and no output schema, the description should explain what is returned (e.g., N:P:K amounts). It does not, leaving a gap. Still adequate for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and already describes parameters well with enums and min. Description adds no additional meaning beyond restating parameter names, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it calculates NPK fertilizer quantities based on crop type and soil type. However, it omits mentioning the required surface area input, which is a key parameter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies when to use (when needing NPK quantities for a given crop and soil), but provides no exclusions or alternatives to sibling tools like calculate_soil_ph_amendment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_fish_tank_heater (B)

Compute aquarium heater wattage needed by tank volume and target temp. Use for aquarium setup. Inputs: tank L, current temp, target temp. Returns wattage W. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
room_temp_c | Yes | Room temperature °C |
target_temp_c | Yes | Target water temperature °C |
volume_liters | Yes | Tank volume liters |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden. It only states the purpose without disclosing the calculation method, assumptions, or output behavior. An agent cannot infer potential limitations or edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence with no wasted words. Efficient but very minimal; slightly more informative context would be beneficial without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the tool being a simple calculation with only 3 parameters and no output schema, the description lacks any mention of output format, assumptions, or common usage scenarios. This leaves the agent with insufficient context to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so each parameter already has a clear description in the schema. The tool description adds no additional meaning beyond what is already there, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly leads with the verb 'Compute' and the specific resource 'aquarium heater wattage needed'. It distinguishes itself from siblings like 'calculate_aquarium_volume', which targets a different metric.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. There is no mention of prerequisites, assumptions, or constraints, such as typical tank conditions or heater sizing standards.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_flight_distance (A)

Calculate great-circle distance between two coordinates. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
lat1 | Yes | Departure latitude |
lat2 | Yes | Arrival latitude |
lon1 | Yes | Departure longitude |
lon2 | Yes | Arrival longitude |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears the full burden of behavioral disclosure. It states the core operation (great-circle distance) but omits important details such as the formula used (e.g., Haversine), expected units for input/output, and whether the result is in kilometers, miles, or nautical miles. The description is adequate for a simple tool but lacks transparency on technical assumptions.
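For context, the Haversine method mentioned above is the usual way to compute a great-circle distance. A minimal sketch, assuming degree inputs and a 6371 km mean Earth radius (neither of which the description confirms):

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean radius; the tool's actual constant is unknown

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometers (inputs in degrees)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

Stating the formula, the Earth-radius assumption, and the output unit in the description would close exactly the gaps identified here.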

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the verb and key concept. Every word is functional, with no redundancy or filler. It efficiently communicates the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (4 parameters, no output schema), the description provides the essential purpose but fails to mention the return value or units. While the input schema covers the parameters, the agent has no information about what the tool returns (e.g., a number in kilometers). This gap reduces completeness for a tool with no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with each parameter described (e.g., 'Departure latitude'). The description adds no additional semantic information beyond what the schema already provides. According to guidelines, with high schema coverage, a baseline of 3 is appropriate, as the description does not enhance understanding of the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the function: 'Calculate great-circle distance between two coordinates'. The verb 'calculate' paired with the specific resource 'great-circle distance' and parameter 'two coordinates' leaves no ambiguity about what the tool does. It distinguishes itself from sibling tools like calculate_distance_2d (Euclidean) and calculate_distance_3d (3D distance) by specifying 'great-circle', which implies spherical Earth distance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. For instance, it does not mention that it is appropriate for geographic coordinates on Earth, or that it assumes a spherical model. There are no explicit exclusions or comparisons to sibling tools. An agent would have no context to decide between this and other distance calculators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_floor_area (B)

Calculate total floor area and Carrez habitable area from a list of rooms. Returns: {rooms, total_area_m2, carrez_area_m2, note}. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
rooms | Yes | Rooms with length and width in meters |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; the description does not disclose behavioral traits beyond the calculation itself. It is safe to assume the tool is a read-only computation, but there is no explicit statement about side effects, permissions, or invariants. Adequate for a simple calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Clearly communicates the essential purpose. Ideal conciseness for a straightforward tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description implies the tool returns total floor area and Carrez habitable area, which is minimal. Missing details like units (likely square meters) or return format. Adequate for a simple calculation but could be more complete.
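For context, the total is just a sum of length × width per room; the Carrez figure is the part a description would need to spell out, since the French loi Carrez excludes floor space under a 1.80 m ceiling, which length and width alone cannot capture. A minimal sketch of the total-area half:

```python
def floor_area(rooms):
    """Sum rectangular room areas in m².

    Each room is a (length_m, width_m) pair. The Carrez habitable figure
    would additionally exclude space under a 1.80 m ceiling, which this
    sketch cannot derive from length and width alone.
    """
    return sum(length * width for length, width in rooms)
```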

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the 'rooms' parameter ('Rooms with length and width in meters'). The description adds no further parameter-level information beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it calculates total floor area and Carrez habitable area from a list of rooms. Verb 'calculate' plus specific resource 'floor area and Carrez habitable area' makes purpose clear. However, it does not explicitly differentiate from sibling tools like 'calculate_area' or 'calculate_surface_carrez', leaving some ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No mention of prerequisites, context, or exclusions. The description simply states what it does, leaving the agent to infer appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_flow_rate_convert (A)

Convert flow rate between L/s, L/min, L/h, m³/h, gpm, cfm. Use for plumbing, HVAC, or industrial design. Inputs: value, from-unit, to-unit. Returns converted flow rate. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
value | Yes | Flow rate value |
to_unit | Yes | Target unit |
from_unit | Yes | Source unit |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist. The description only states the conversion purpose without detailing any behavioral aspects such as rounding, precision, or side effects. With no annotations, the description falls short on transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and front-loaded, containing no unnecessary words. It effectively communicates the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of a unit conversion tool, the description is mostly complete. It lacks explicit mention of the return value (e.g., converted number), but that is implied by 'convert'. For a tool with no output schema, slightly more detail could help.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions and enums for each parameter. The description adds no additional semantics beyond listing the units, which is already in the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Convert flow rate' with a specific list of units (L/min, L/h, m³/h, GPM, CFM), distinguishing it from other conversion and flow calculation tools among its siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus other conversion tools or flow rate calculators. The usage is implied by the name and description, but it lacks directives like 'use this for unit conversion, not for pipe flow calculations'.
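For reference, such a converter is typically a pure pivot through one base unit. A minimal sketch, assuming L/s as the pivot and US definitions for gpm and cfm (the description does not say which gallon or foot is meant):

```python
# Litres per second represented by one unit of each supported flow-rate unit.
TO_L_PER_S = {
    "L/s": 1.0,
    "L/min": 1.0 / 60.0,
    "L/h": 1.0 / 3600.0,
    "m3/h": 1000.0 / 3600.0,
    "gpm": 3.785411784 / 60.0,   # US liquid gallon assumed
    "cfm": 28.316846592 / 60.0,  # international cubic foot assumed
}

def convert_flow(value, from_unit, to_unit):
    """Convert a flow rate by pivoting through L/s."""
    return value * TO_L_PER_S[from_unit] / TO_L_PER_S[to_unit]
```

Because every path goes through one base unit, adding a unit means adding one table entry rather than n new pairwise factors.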

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_food_cost_per_serving (C)

Compute food cost per serving from ingredient costs. Use for restaurants, meal-prep services. Inputs: list of ingredients with cost and quantity used, servings. Returns cost per serving and total. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
servings | Yes | |
ingredients | Yes | |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states the basic purpose without disclosing how costs are computed (e.g., sum of (price/total_quantity)*used_quantity), any assumptions, or return format. The minimal disclosure is insufficient for safe tool invocation.
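The cost logic guessed at above can be made concrete. A minimal sketch, with field names that are assumptions rather than the server's documented schema:

```python
def food_cost_per_serving(ingredients, servings):
    """Pro-rate each ingredient's pack price by the fraction used,
    then split the total across servings.

    Each ingredient is assumed to carry price, total_quantity, and
    used_quantity fields; these names are illustrative guesses.
    """
    total = sum(
        ing["price"] / ing["total_quantity"] * ing["used_quantity"]
        for ing in ingredients
    )
    return {"total_cost": total, "cost_per_serving": total / servings}
```

If the server computes something like this, stating it in one sentence would resolve most of the ambiguity flagged here.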

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence of 11 words. While it efficiently states the purpose, it lacks structural elements like bullet points or additional clarification that could aid readability for a tool with complex nested parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (nested array, no output schema, no annotations), the description is woefully incomplete. It omits calculation logic, unit expectations, and output format, making it hard for an agent to use correctly without external knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain any parameter beyond vague references to 'ingredient prices and quantities'. It fails to clarify key fields like price, total_quantity, used_quantity, or the servings integer, leaving the agent to guess their meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates recipe cost per serving from ingredient prices and quantities. It uses specific verb 'calculate' and resource 'recipe cost per serving', which distinguishes it from sibling calculator tools like calculate_recipe_nutrition or calculate_recipe_scaling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool or alternatives. It does not mention prerequisites, context, or when not to use, leaving the agent without decision support compared to similar calculators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_force (D)

Compute force using Newton's second law F=m·a. Use for physics problems. Inputs: mass kg, acceleration m/s². Returns force in newtons. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
force_n | No | Newtons |
mass_kg | No | Mass kg |
acceleration | No | m/s² |

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description fails to disclose behavioral traits such as which variable is computed, whether it requires two inputs, or how it handles missing values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is overly terse: one phrase with no full sentences. It is concise but lacks the structure needed to be informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description must explain what the tool returns. It does not mention return values or behavior for partial inputs. The tool's flexibility (all optional params) is not addressed.
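Since all three parameters are optional, one plausible reading is "given any two of F, m and a, solve for the third". A sketch of that assumed behavior (the server's actual handling of partial input is undocumented):

```python
def solve_newton_second_law(force_n=None, mass_kg=None, acceleration=None):
    """Solve F = m*a for whichever of the three values is missing.

    Requiring exactly two inputs is an assumption; the tool's real
    behavior for partial input is not documented.
    """
    given = sum(v is not None for v in (force_n, mass_kg, acceleration))
    if given != 2:
        raise ValueError("provide exactly two of force_n, mass_kg, acceleration")
    if force_n is None:
        return {"force_n": mass_kg * acceleration}
    if mass_kg is None:
        return {"mass_kg": force_n / acceleration}
    return {"acceleration": force_n / mass_kg}
```

A description that named the computed variable and the two-of-three contract would address the gaps flagged above.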

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover 100% of parameters, providing units (Newtons, kg, m/s²). The tool description adds no extra meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Newton's 2nd law: F=ma' but does not explicitly state what the tool calculates (force, mass, or acceleration). It is ambiguous and borders on tautology, lacking a clear verb+resource statement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the many sibling physics calculators. There is no context on prerequisites or scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_fraction (B)

Perform fraction operations: add, subtract, multiply, divide, simplify. Returns: {input, result, decimal}. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)

Name | Required | Description
den1 | Yes | Denominator 1
den2 | No | Denominator 2
num1 | Yes | Numerator 1
num2 | No | Numerator 2
operation | Yes | Operation

Behavior 2/5

With no annotations, the description should disclose behavior such as return format or side effects. It merely restates the operations, offering no insights beyond the obvious. For a calculation tool, minimal behavioral context is provided.

Conciseness 5/5

The description is a single, front-loaded sentence that conveys the core functionality efficiently without superfluous words.

Completeness 3/5

Given the simple nature of the tool and high schema coverage, the description is somewhat complete but lacks explanation of return values (e.g., simplified fraction or decimal). It does not mention constraints like denominator positivity, though schema handles that.

Parameters 3/5

Schema coverage is 100% with descriptive parameter names and descriptions. The description adds value by listing the operations in natural language, which reinforces the enum values, but adds little beyond what the schema already conveys.

Purpose 4/5

The description clearly states the tool performs fraction operations (add, subtract, multiply, divide, simplify), which specifies the verb and resource. However, the presence of a sibling tool 'calculate_fraction_operations' suggests overlap without differentiation, slightly reducing clarity.

Usage Guidelines 2/5

No guidance is provided on when to use this tool versus alternatives like 'calculate_fraction_operations' or 'calculate_ratio_simplify'. There are no prerequisites or context for usage.

calculate_fraction_operations (C)

Add, subtract, multiply, or divide two fractions and return the simplified result. Use for math homework. Inputs: num1/den1, op, num2/den2. Returns result fraction (lowest terms). See list_bundles for related 'math' calculators.
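
The behavior the description promises — operate on two fractions, then reduce to lowest terms — matches what Python's fractions module does automatically; a hedged sketch (function and parameter names assumed, not the server's code):

```python
import operator
from fractions import Fraction

# Dispatch table so division is only evaluated when requested
OPS = {"add": operator.add, "subtract": operator.sub,
       "multiply": operator.mul, "divide": operator.truediv}

def fraction_op(num1: int, den1: int, operation: str, num2: int, den2: int) -> Fraction:
    """Combine two fractions; Fraction reduces the result to lowest terms."""
    return OPS[operation](Fraction(num1, den1), Fraction(num2, den2))

# 1/2 + 1/3 -> Fraction(5, 6)
```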

Parameters (JSON Schema)

Name | Required | Description
den1 | Yes | Denominator of first fraction
den2 | Yes | Denominator of second fraction
num1 | Yes | Numerator of first fraction
num2 | Yes | Numerator of second fraction
operation | Yes | Operation to perform

Behavior 2/5

No annotations are provided, so the description carries the full burden. It fails to disclose any behavioral traits such as input validation, error handling, or default behavior (e.g., does it simplify results?), and only states the function.

Conciseness 4/5

The description is a single sentence with no filler. It is appropriately concise, though it could be slightly more informative without adding length.

Completeness 3/5

Given the tool has 5 required parameters and no output schema, the description does not explain return values or edge cases. It is adequate for a simple arithmetic tool but leaves gaps.

Parameters 3/5

Schema description coverage is 100%, so baseline is 3. The description adds no extra meaning beyond what the schema already provides for each parameter.

Purpose 4/5

The description clearly states the tool performs arithmetic operations on two fractions, which matches the name and schema. However, it does not differentiate from sibling tool 'calculate_fraction', which could also handle fractions.

Usage Guidelines 2/5

No guidance on when to use this tool versus alternatives, nor any conditions or prerequisites. The description simply states what it does, with no usage context.

calculate_freezer_duration (C)

Return maximum recommended freezer storage duration for a food type. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)

Name | Required | Description
food_type | Yes |

Behavior 2/5

No annotations are provided, so the description must disclose behavioral traits. It mentions 'maximum recommended' but fails to explain the source of recommendations, whether they are based on safety or quality, or any assumptions. This lack of detail could lead to misuse.

Conciseness 5/5

The description is a single sentence that is front-loaded and contains no unnecessary words. Every word serves a purpose, making it highly efficient.

Completeness 2/5

The tool has no output schema and no annotations, so the description must fully inform the agent. It omits crucial details like the unit of duration (days, months?) and whether the duration is a single value or range. For a simple lookup, this gap reduces usability.

Parameters 3/5

The description adds minimal value beyond the input schema: it paraphrases 'food_type'. However, the schema fully defines the parameter with an enum of 7 clear values. The description does not explain what happens if a food type is not listed, but the enum prevents that. The baseline of 3 is appropriate given high schema coverage.

Purpose 4/5

The description clearly states the verb 'return' and the resource 'maximum recommended freezer storage duration' for a specific 'food type'. It accurately describes the tool's function. However, it does not differentiate this tool from sibling tools like 'calculate_freezer_thaw_time' or other food-related calculators, which may cause confusion.

Usage Guidelines 2/5

There is no guidance on when to use this tool versus alternatives. The description does not specify any prerequisites, exclusions, or context for selecting this tool over others. The agent receives no help in deciding when to invoke this tool.

calculate_freezer_thaw_time (B)

Estimate thawing time for frozen food by weight and method. Returns: {safe_temp_check, tip}. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)

Name | Required | Description
method | Yes | Thawing method
weight_kg | Yes | Food weight kg

Behavior 2/5

With no annotations provided, the description bears full responsibility for disclosing behavioral traits. It only states the purpose without detailing assumptions, accuracy, output format, or limitations. The agent gains no insight into how the estimation works or what to expect.

Conciseness 5/5

The description is a single, well-structured sentence starting with a verb. It contains no unnecessary words or repetition, making it efficient and easy to parse.

Completeness 2/5

For a tool with two parameters and no output schema, the description lacks critical context such as the unit of the estimated time (minutes, hours, etc.) and whether the estimate is approximate or precise. This omission could lead to incorrect interpretation of the result.

Parameters 3/5

Schema coverage is 100%, so the parameters are fully documented in the schema. The description simply paraphrases the parameters ('by weight and method') without adding extra meaning or details such as allowed values or units.

Purpose 5/5

The description clearly states the tool estimates thawing time for frozen food based on weight and method. It uses the specific verb 'estimate' and resource 'frozen food', distinguishing it from other calculation tools like 'calculate_cooking_time' or 'calculate_meat_cooking'.

Usage Guidelines 2/5

No guidance is provided on when to use this tool versus alternatives such as 'calculate_cooking_time' or 'calculate_meat_cooking_time'. There is no mention of specific contexts or exclusions.

calculate_french_income_tax (A)

Calculate French income tax (IR) for 2026 using progressive brackets per Article 197 CGI with family quotient system. Returns: {income, family_quotient, total_tax, effective_rate_pct, marginal_rate_pct, brackets}. See list_bundles for related 'finance-france' calculators.
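
The family-quotient mechanism the description cites works by dividing income by the number of fiscal shares, applying the progressive scale to that quotient, and multiplying the resulting tax back by the shares. A sketch with placeholder brackets (the real 2026 Article 197 CGI thresholds are not reproduced here):

```python
# Placeholder (lower bound, marginal rate) pairs -- illustration only,
# NOT the actual 2026 Article 197 CGI scale.
BRACKETS = [(0, 0.00), (11_000, 0.11), (28_000, 0.30), (80_000, 0.41), (170_000, 0.45)]

def income_tax(income: float, parts: float = 1.0) -> float:
    """Tax the per-share income on the progressive scale, then scale by shares."""
    q = income / parts  # family quotient
    uppers = [low for low, _ in BRACKETS[1:]] + [float("inf")]
    tax = sum((min(q, up) - low) * rate
              for (low, rate), up in zip(BRACKETS, uppers) if q > low)
    return tax * parts
```

A household with 2 parts and 24 000 € of income pays exactly twice the tax of a single person earning 12 000 €, which is the point of the quotient system.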

Parameters (JSON Schema)

Name | Required | Description
parts | No | Number of fiscal shares (1=single, 2=married, +0.5 per child)
income | Yes | Annual net taxable income in euros

Behavior 4/5

With no annotations, the description provides meaningful behavioral context by specifying the calculation method (progressive brackets, family quotient system) and legal basis (Article 197 CGI). This helps the agent understand the tool's logic beyond basic inputs.

Conciseness 5/5

Single sentence, front-loaded with the main action and resource, no redundant words. Highly concise and well-structured.

Completeness 4/5

Given two parameters, no output schema, and no annotations, the description adequately covers what the tool does and how it works. It mentions the legal reference and family quotient system, offering sufficient context for an agent to use it appropriately.

Parameters 3/5

Both parameters have full schema descriptions (100% coverage), so the schema already explains 'income' and 'parts'. The description adds the legal context but does not significantly enhance parameter semantics beyond what the schema provides.

Purpose 5/5

The description clearly states the tool calculates French income tax for 2026 using progressive brackets per a specific legal article (Article 197 CGI) and the family quotient system. The verb 'Calculate' and resource 'French income tax' are specific, and the tool is easily distinguished from numerous sibling tax calculators for other countries.

Usage Guidelines 3/5

The description does not explicitly state when to use this tool versus alternatives or provide prerequisites. It only implies usage for French income tax calculation. No 'when not to use' or references to sibling tools are given.

calculate_french_salary (A)

Convert French gross salary to net salary for 2026 (cadre, non-cadre, or civil servant). Returns monthly/annual net, social contributions, employer cost. See list_bundles for related 'finance-france' calculators.
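
At its core, the gross-to-net conversion is gross pay minus status-dependent employee contributions. The rates below are rough placeholders for illustration only — actual French social-charge schedules vary by status, year, and salary tranche:

```python
# Placeholder employee-contribution rates -- illustration only, not the
# real 2026 French schedules.
EMPLOYEE_RATE = {"cadre": 0.25, "non-cadre": 0.22, "civil_servant": 0.17}

def gross_to_net_monthly(gross_monthly: float, status: str = "cadre") -> float:
    """Net (before income tax) = gross minus employee social contributions."""
    return gross_monthly * (1 - EMPLOYEE_RATE[status])
```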

Parameters (JSON Schema)

Name | Required | Description | Default
status | No | Employment status | cadre
gross_monthly | Yes | Gross monthly salary in euros |

Behavior 3/5

With no annotations, the description carries the full burden. It discloses that the tool is a conversion calculation, non-destructive, and year-specific (2026). However, it does not detail behavioral traits such as response format, latency, or assumptions about the calculation model (e.g., net after social charges only).

Conciseness 5/5

The description is a single, information-dense sentence. It front-loads the core action and immediately provides scope (year, statuses) and outputs, with no redundant words.

Completeness 3/5

For a 2-parameter tool with no output schema, the description covers the main outputs but lacks detail on the return structure (single value vs object) and does not clarify whether 'net salary' includes income tax. This ambiguity leaves room for misuse.

Parameters 4/5

Schema coverage is 100%, so baseline is 3. The description adds value by clarifying that the result includes monthly and annual net amounts, social contributions, and employer cost, which helps the agent understand the output context for the input parameters (gross_monthly and status).

Purpose 5/5

The description clearly states the tool's function: converting French gross salary to net salary for 2026, specifying three employment statuses (cadre, non-cadre, civil servant). It lists the outputs (monthly/annual net, social contributions, employer cost), distinguishing it from sibling calculators for other countries.

Usage Guidelines 3/5

The description implies use for French salary conversion but provides no explicit guidance on when to use this tool versus alternatives (e.g., other country calculators or the sibling 'calculate_employer_cost_fr'). No exclusion criteria or prerequisites are mentioned.

calculate_french_vat (A)

Calculate French VAT (TVA) — convert between HT (before tax) and TTC (after tax). Supports all 4 French VAT rates. Returns: {amount_ht, amount_ttc, vat_amount, vat_rate}. See list_bundles for related 'finance-france' calculators.
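
The HT/TTC relationship is a single factor, TTC = HT × (1 + rate/100); a sketch mirroring the documented return shape (field names taken from the description above, the function itself assumed):

```python
def vat_breakdown(amount: float, mode: str = "ht", rate: float = 20.0) -> dict:
    """Convert between HT (before tax) and TTC (after tax) amounts."""
    factor = 1 + rate / 100
    ht = amount if mode == "ht" else amount / factor  # back out HT from TTC
    ttc = ht * factor
    return {"amount_ht": round(ht, 2), "amount_ttc": round(ttc, 2),
            "vat_amount": round(ttc - ht, 2), "vat_rate": rate}
```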

Parameters (JSON Schema)

Name | Required | Description | Default
mode | No | Input mode: ht=before tax, ttc=after tax | ht
rate | No | VAT rate percentage | 20
amount | Yes | Amount in euros |

Behavior 4/5

With no annotations, the description sufficiently conveys that the tool performs conversion calculations without side effects. It doesn't hide behavioral traits, though it could explicitly state it is read-only or non-destructive.

Conciseness 5/5

The description is concise—one sentence front-loading the core purpose. No redundant information.

Completeness 2/5

No output schema is provided, and the description fails to explain the return format (e.g., returns both HT and TTC amounts). For a calculator with 3 parameters and enums, this missing information hinders agent understanding.

Parameters 3/5

Schema coverage is 100%, so the description adds no new meaning beyond what is already in the schema. The terms HT and TTC are already explained in schema descriptions.

Purpose 5/5

The description clearly specifies the tool's purpose: calculating French VAT (TVA) with conversion between HT and TTC, and explicitly states support for all 4 French VAT rates. This distinguishes it from siblings like calculate_belgian_vat or calculate_vat_generic.

Usage Guidelines 4/5

The description implicitly sets context by naming 'French VAT', making it obvious when to use this tool (for French VAT calculations). However, it lacks explicit guidance on when not to use it or mention of alternatives, e.g., for non-French VAT scenarios.

calculate_frequency_note (B)

Calculate the frequency of a musical note based on equal temperament tuning. See list_bundles for related 'musique' calculators.
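
Equal temperament fixes each semitone at a ratio of 2^(1/12), so a note's frequency is the tuning reference scaled by its semitone distance from A4; a sketch (sharp-based note naming assumed):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_frequency(note: str, octave: int, tuning_reference: float = 440.0) -> float:
    """f = reference * 2**(n/12), where n is the semitone offset from A4."""
    semitones = NOTES.index(note) - NOTES.index("A") + (octave - 4) * 12
    return tuning_reference * 2 ** (semitones / 12)

# A4 -> 440.0 Hz; A5 -> 880.0 Hz; C4 (middle C) -> ~261.63 Hz
```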

Parameters (JSON Schema)

Name | Required | Description
note | Yes | Note name in chromatic scale
octave | Yes | Octave number (A4 = concert pitch reference)
tuning_reference | No | Tuning reference frequency in Hz (default A4=440Hz)

Behavior 2/5

No annotations are provided, and the description only says 'calculate', which implies a read-only calculation. However, it does not disclose any behavioral traits such as side effects, required permissions, or error handling. For a simple calculation, more detail on the formula or return format would be helpful.

Conciseness 5/5

The description is a single sentence of 12 words, extremely concise and front-loaded. Every word is informative, with no redundancy or unnecessary content.

Completeness 2/5

Given the lack of output schema and annotations, the description is underspecified. It does not mention the return value (frequency in Hz), potential errors, or the formula used. For a calculation tool, additional context on edge cases and output format would improve completeness.

Parameters 3/5

The input schema has 100% coverage, with each parameter described adequately (note enum, octave range, tuning reference default). The description does not add any new information beyond what the schema provides, meeting the baseline of 3.

Purpose 5/5

The description clearly states the verb 'calculate' and the resource 'frequency of a musical note', with the specific method 'equal temperament tuning'. It distinguishes itself from sibling tools, which are other specific calculations or conversions.

Usage Guidelines 2/5

The description provides no guidance on when to use this tool, any prerequisites, or alternatives. It does not mention the context (e.g., music theory, tuning) or exclusions.

calculate_fuel_consumption (A)

Calculate fuel consumption in L/100km and MPG from distance and fuel used. Returns: {l_100km, mpg_us, mpg_uk, co2_g_km_petrol}. See list_bundles for related 'auto-transport' calculators.
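
The two headline figures are reciprocal views of the same ratio: L/100km is liters per kilometer times 100, and MPG is a constant divided by the L/100km figure (235.215 for US gallons, 282.481 for imperial gallons, both standard rounded conversion factors). A sketch with field names following the description:

```python
def fuel_consumption(distance_km: float, fuel_liters: float) -> dict:
    """L/100km = liters/km * 100; MPG = conversion constant / (L/100km)."""
    l_100km = fuel_liters / distance_km * 100
    return {"l_100km": round(l_100km, 2),
            "mpg_us": round(235.215 / l_100km, 2),   # US gallons
            "mpg_uk": round(282.481 / l_100km, 2)}   # imperial gallons
```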

Parameters (JSON Schema)

Name | Required | Description
distance_km | Yes | Distance in km
fuel_liters | Yes | Fuel consumed in liters

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
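The four returned fields likely follow the standard conversions sketched below. The 235.215 / 282.481 MPG constants and the ~2310 g CO₂ per litre of petrol factor are assumptions the server does not document:

```python
def fuel_consumption(distance_km: float, fuel_liters: float) -> dict:
    """Plausible sketch of the {l_100km, mpg_us, mpg_uk, co2_g_km_petrol} result."""
    l_100km = fuel_liters / distance_km * 100
    return {
        "l_100km": round(l_100km, 2),
        "mpg_us": round(235.215 / l_100km, 2),  # 1 US gal = 3.785 L, 1 mi = 1.609 km
        "mpg_uk": round(282.481 / l_100km, 2),  # 1 imperial gal = 4.546 L
        # ~2310 g CO2 emitted per litre of petrol burned (common approximation)
        "co2_g_km_petrol": round(fuel_liters * 2310 / distance_km, 1),
    }

print(fuel_consumption(500, 35))  # 7 L/100km over a 500 km trip
```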
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the basic behavior but does not mention output format, error handling, or any side effects. Adequate for a simple calculator but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with key information, no wasted words. Highly concise and structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the purpose but lacks output details (e.g., returns both L/100km and MPG? in what format?). Given no output schema, more context would be helpful. Adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema's parameter descriptions, which are already clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: calculate fuel consumption in L/100km and MPG from distance and fuel used. It includes specific verb and resource, and distinguishes from other calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as 'calculate_fuel_economy_conversion' or other fuel-related tools. Lacks context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_fuel_cost (B)

Compute fuel cost for a journey. Use for trip budgeting or company expense. Inputs: distance km, consumption L/100km, fuel price €/L. Returns total cost and L used. See list_bundles for related 'auto-transport' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
fuel_price | Yes | Price/liter |
consumption | Yes | L/100km |
distance_km | Yes | Distance km |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
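The arithmetic behind this tool is straightforward; a minimal sketch, assuming the result is simply litres used times price per litre:

```python
def fuel_cost(distance_km: float, consumption: float, fuel_price: float) -> dict:
    """consumption is in L/100km; fuel_price is the price per litre."""
    liters = distance_km * consumption / 100
    return {"liters_used": round(liters, 2), "total_cost": round(liters * fuel_price, 2)}

# 600 km at 6.5 L/100km with fuel at 1.80/L
print(fuel_cost(600, 6.5, 1.80))
```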
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as whether the tool is read-only, what output it returns, or if it has side effects. For a calculator tool, the lack of return value description is a notable gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and efficiently states the tool's primary function. No extraneous words are present, and it is appropriately sized for a simple calculation tool. Every word carries meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator with three numeric inputs and no output schema, the description adequately conveys the core functionality. However, it omits the result unit (e.g., currency implied by fuel_price) and does not confirm whether the output is a single number. This leaves minor ambiguity for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes all three parameters with clear names and descriptions (e.g., consumption: 'L/100km'). The description adds no additional semantic information. With 100% schema coverage, the baseline is 3, and the description does not improve on it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Compute fuel cost for a journey', which is specific enough to indicate the tool's purpose. However, it does not differentiate from siblings like 'calculate_fuel_consumption' that might compute consumption per km instead of total cost. A more precise description could include 'total cost' or the formula components to fully distinguish.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs siblings such as 'calculate_fuel_consumption' or 'calculate_fuel_economy_conversion'. The description lacks any when-to-use, when-not-to-use, or alternative tool references, leaving the agent to guess based solely on the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_fuel_economy_conversion (B)

Convert between fuel economy units: L/100km, mpg-US, mpg-UK, km/L. Use for car comparisons across regions. Inputs: value, from-unit, to-unit. Returns converted economy. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
value | Yes | Fuel economy value to convert |
to_unit | Yes | Target unit of fuel economy |
from_unit | Yes | Source unit of fuel economy |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
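All four supported units can be converted through an L/100km pivot, and each conversion happens to be its own inverse, so one table serves both directions. The unit key strings here are assumptions (the schema only says "source/target unit of fuel economy"):

```python
# Each lambda maps its unit to L/100km AND L/100km back to that unit
# (the conversions are involutions), so one table covers both directions.
CONV = {
    "l_100km": lambda v: v,
    "mpg_us": lambda v: 235.215 / v,  # miles per US gallon
    "mpg_uk": lambda v: 282.481 / v,  # miles per imperial gallon
    "km_l": lambda v: 100.0 / v,      # kilometres per litre
}

def convert_economy(value: float, from_unit: str, to_unit: str) -> float:
    return round(CONV[to_unit](CONV[from_unit](value)), 2)

print(convert_economy(30, "mpg_us", "l_100km"))  # 7.84
```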
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It merely restates the units already defined in the schema, without disclosing behavior like rounding, precision, or error handling. Minimal value added beyond the structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is clear and concise, efficiently conveying the tool's purpose with no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool with three required parameters and no output schema, the description adequately covers the core purpose and units. It could mention exactness or rounding, but is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning to parameters beyond what is already in the schema (e.g., enumerations).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and specifies the resource 'fuel economy' along with the allowed units. It is clear but does not differentiate from the sibling tool 'convert_fuel_consumption', which likely performs a similar conversion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description only states what it does, leaving the agent to infer the context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_future_value (C)

Compute the future value (FV) of a present sum at a given interest rate. Use for savings projections. Inputs: present value, annual rate %, years, compounding frequency. Returns FV and total interest. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
rate | Yes | Annual rate percent |
years | Yes | Number of years |
present_value | Yes | Present value EUR |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
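Note that the description mentions a compounding frequency input, but the schema exposes only rate, years, and present value. A sketch of the standard FV formula under the assumption of annual compounding:

```python
def future_value(present_value: float, rate: float, years: float) -> dict:
    """FV = PV * (1 + r)^n; rate is annual percent, annual compounding assumed
    (the schema exposes no compounding-frequency parameter)."""
    fv = present_value * (1 + rate / 100) ** years
    return {"future_value": round(fv, 2), "total_interest": round(fv - present_value, 2)}

# 10,000 at 3% for 10 years
print(future_value(10_000, 3, 10))
```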
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but provides no behavioral details beyond the basic calculation. It does not disclose assumptions (e.g., compounding frequency, rounding) or potential side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, no filler. Efficiently conveys the core purpose but could include a brief note on assumptions without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacking output schema and annotations, the description does not provide enough context for the agent to understand return value or edge cases (e.g., zero rate, large years). Requires more detail for a financial calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no additional semantics beyond what the schema already provides, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Compute the future value (FV) of a present sum', specifying the verb and resource. It distinguishes from the sibling 'calculate_present_value' which is the inverse, but could be more explicit about the formula (e.g., compound vs simple interest).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_compound_interest' or 'calculate_present_value'. The description lacks any contextual cues for agent decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_garden_soil (C)

Compute soil volume (m³) and number of bags needed for a garden bed. Use for gardening. Inputs: area, depth, bag volume. Returns m³ and bag count. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
width_m | Yes | Width m |
depth_cm | Yes | Depth cm |
length_m | Yes | Length m |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
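The likely rectangular-bed formula, sketched below. The schema exposes no bag-size parameter even though the description promises a bag count, so the 50 L bag volume here is a labeled assumption:

```python
import math

def garden_soil(length_m: float, width_m: float, depth_cm: float,
                bag_volume_l: float = 50.0) -> dict:
    """Rectangular bed assumed; bag_volume_l is an assumption, since the
    schema exposes no bag-size parameter."""
    volume_m3 = length_m * width_m * depth_cm / 100
    bags = math.ceil(volume_m3 * 1000 / bag_volume_l)  # 1 m3 = 1000 L
    return {"volume_m3": round(volume_m3, 3), "bags": bags}

# 3 m x 1.2 m bed filled 20 cm deep
print(garden_soil(3, 1.2, 20))
```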
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations; description fails to disclose assumptions (rectangular shape) or output details, providing minimal behavioral insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient and front-loaded, though it lacks elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; does not specify return format (e.g., cubic meters, bag count), leaving gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic descriptions; description adds no extra meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it calculates soil volume and bags needed, but does not differentiate from siblings like calculate_raised_bed_soil.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives; lacks context for when-not-to-use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_garden_sunlight_hours (A)

Estimate effective daily sunlight hours for a garden based on latitude, month and orientation. See list_bundles for related 'jardinage' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
month | Yes | Month number (1=January, 12=December) |
latitude | Yes | Latitude in degrees (-90 to 90) |
orientation | Yes | Garden orientation / aspect |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
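The astronomical part of such an estimate usually comes from the standard sunrise equation with an approximate solar declination; the server's actual method, and how it scales the result by the orientation parameter, is undocumented, so this sketch covers day length only:

```python
import math

def day_length_hours(latitude: float, month: int) -> float:
    """Approximate daylight hours at mid-month via the sunrise equation.

    Orientation adjustment is NOT modelled here -- the server applies some
    undocumented factor for the garden's aspect.
    """
    day_of_year = month * 30 - 15  # rough mid-month day number
    # Cooper-style solar declination approximation, in degrees
    decl = -23.44 * math.cos(math.radians(360 / 365 * (day_of_year + 10)))
    x = -math.tan(math.radians(latitude)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))  # clamp for polar day / polar night
    return round(2 / 15 * math.degrees(math.acos(x)), 1)

print(day_length_hours(0, 6))   # equator: ~12 h year-round
print(day_length_hours(48, 6))  # mid-latitudes in June: ~16 h
```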
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description provides minimal behavioral context beyond the schema. It uses 'estimate' which implies approximation but does not disclose assumptions, limitations, or output format. With no annotations, the description carries the full burden but fails to add meaningful behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Short, clear, and to the point. No redundant information; every word serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is adequate for a simple estimation tool but lacks completeness. It does not specify the output unit (hours) or mention any constraints like ideal conditions or shading. With no output schema and no annotations, more context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover all three parameters (latitude, month, orientation) with 100% coverage. The description adds 'garden' context but no additional semantic details beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'estimate' and the resource 'effective daily sunlight hours' with specific context 'for a garden based on latitude, month and orientation'. It distinguishes itself from siblings like calculate_garden_soil or calculate_sun_exposure by focusing precisely on sunlight hours estimation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing sunlight hours for a garden, but lacks explicit guidance on when not to use it or alternatives. With many sibling calculate_ tools, it does not differentiate its specific use case from similar tools like calculate_sun_exposure or calculate_sunrise_sunset.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_garden_water_needs (B)

Compute weekly water needs for a garden by area and plant type. Use for irrigation planning. Inputs: garden m², climate, plant mix. Returns L/week and watering frequency. See list_bundles for related 'jardinage' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
season | Yes | Current season |
plant_type | Yes | Type of plants in the garden |
surface_m2 | Yes | Garden surface area in square meters |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
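A tool like this typically multiplies a per-m² base rate by a seasonal factor. The coefficients below are purely illustrative, not the server's actual values, which are undocumented:

```python
# Illustrative coefficients only -- the server's actual tables are not published.
BASE_L_PER_M2_WEEK = {"vegetables": 25, "lawn": 20, "flowers": 15, "shrubs": 10}
SEASON_FACTOR = {"spring": 1.0, "summer": 1.5, "autumn": 0.7, "winter": 0.3}

def garden_water_needs(surface_m2: float, plant_type: str, season: str) -> dict:
    liters = surface_m2 * BASE_L_PER_M2_WEEK[plant_type] * SEASON_FACTOR[season]
    return {"liters_per_week": round(liters, 1)}

# 20 m2 vegetable patch in summer
print(garden_water_needs(20, "vegetables", "summer"))
```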
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states 'calculate', implying a read-only computation, but does not disclose any behavioral traits such as permission requirements, side effects, or output characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the tool's purpose. While concise, it could be slightly restructured to include output details without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 parameters, no output schema), the description adequately covers the core functionality and inputs. However, it omits any mention of output format or units, which would be helpful for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds irrigation-planning context but does not enhance the meaning of individual parameters beyond what the schema already provides. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'compute', the resource 'weekly water needs', and the inputs (garden area, climate, plant mix). It distinguishes from sibling tools like calculate_garden_soil or calculate_garden_sunlight_hours, which address different garden aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Among many similar calculate_* tools, there is no mention of context, prerequisites, or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_gas_fee_eth (A)

Calculate Ethereum transaction gas fee in ETH and USD. See list_bundles for related 'crypto' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
gas_limit | No | Gas limit for the transaction | 21000 (simple transfer)
eth_price_usd | No | Current ETH price in USD | 3000
gas_price_gwei | Yes | Gas price in Gwei |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
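The underlying arithmetic is the standard gas-fee formula (fee = gas used × gas price, with 1 gwei = 10⁻⁹ ETH); the exact fields the server returns are an assumption here:

```python
def gas_fee_eth(gas_price_gwei: float, gas_limit: int = 21_000,
                eth_price_usd: float = 3_000.0) -> dict:
    """fee = gas_limit * gas_price; 1 gwei = 1e-9 ETH. Defaults mirror the schema."""
    fee_eth = gas_limit * gas_price_gwei * 1e-9
    return {"fee_eth": round(fee_eth, 6), "fee_usd": round(fee_eth * eth_price_usd, 2)}

# simple transfer at 30 gwei, defaults for the rest
print(gas_fee_eth(30))
```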
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It states the calculation purpose but does not disclose behavioral traits like side effects (expected none) or any API-specific behavior. It is adequate for a read-only calculation but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loads core information. Every word serves a purpose, with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 parameters, no output schema), the description covers the basic calculation intent. It does not specify the output format, but for a straightforward calculation, it is reasonably complete. Could be improved by noting the return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add additional meaning beyond the schema, which already describes each parameter well. No extra context is provided for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Calculate Ethereum transaction gas fee in ETH and USD,' specifying the verb, resource, and output currencies. It is distinct from sibling tools like 'calculate_crypto_profit_loss' as no other tool focuses on Ethereum gas fees.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidelines are provided about when to use this tool vs alternatives, prerequisites, or exclusions. While the name implies Ethereum, the description offers no explicit usage context or comparisons to other calculate tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_gcd_lcm (A)

Calculate GCD (PGCD) and LCM (PPCM) of two integers using Euclidean algorithm. Returns: {gcd, lcm}. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
a | Yes | First integer |
b | Yes | Second integer |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
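The description names the Euclidean algorithm, which combined with the gcd·lcm = |a·b| identity gives the {gcd, lcm} result; a minimal sketch:

```python
def gcd_lcm(a: int, b: int) -> dict:
    """GCD via the Euclidean algorithm; LCM from gcd * lcm == |a * b|."""
    x, y = abs(a), abs(b)
    while y:
        x, y = y, x % y  # replace the pair with (smaller, remainder)
    gcd = x
    lcm = abs(a * b) // gcd if gcd else 0  # convention: lcm(0, 0) == 0
    return {"gcd": gcd, "lcm": lcm}

print(gcd_lcm(12, 18))  # {'gcd': 6, 'lcm': 36}
```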
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries full burden. It mentions using the Euclidean algorithm, but lacks details on input constraints, return format, or handling of edge cases (e.g., zero).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, no extraneous information, front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, but without an output schema, the description should clarify the return format. It states both GCD and LCM are calculated, but omits whether it returns a single object or separate values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic descriptions ('First integer', 'Second integer'), and the description adds no further parameter meaning. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates GCD and LCM of two integers using Euclidean algorithm, which is specific and unique among siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The tool's purpose is clear from the name and description, but no explicit guidance on when to use it vs alternatives is provided. However, the context of sibling tools makes its unique application obvious.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_gear_ratio (C)

Compute gear ratio and torque/speed multiplication. Use for mechanical engineering, cycling, automotive. Inputs: driver teeth, driven teeth. Returns ratio and torque multiplier. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
driven_teeth | Yes | Driven gear teeth |
driving_teeth | Yes | Driving gear teeth |

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits (e.g., read-only, side effects). It does not clarify if the tool is a pure function or requires any special context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short, which is concise and front-loaded. However, a bit more detail could be added without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain the return format (e.g., a single ratio, an object with both values). It mentions two outputs but does not describe how they are returned. This is insufficient for a tool with no further documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already covers both parameters with descriptions ('Driving gear teeth' and 'Driven gear teeth'). The description adds no new semantic information beyond hinting at outputs, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Gear ratio and torque multiplier' clearly indicates the tool's output, but it lacks an explicit verb like 'calculate' or 'compute'. However, it is specific to the domain and distinct from siblings due to the specialized terms.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
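For reference, the arithmetic behind the description is a single division; the function and field names below are illustrative guesses, since the tool's return shape is not documented:

```python
def gear_ratio(driving_teeth: int, driven_teeth: int) -> dict:
    # Ratio > 1 means a reduction: more torque, less speed at the driven gear.
    ratio = driven_teeth / driving_teeth
    return {
        "ratio": ratio,
        "torque_multiplier": ratio,      # output torque scales with the ratio
        "speed_multiplier": 1 / ratio,   # output speed scales inversely
    }
```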

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternative calculation tools. The description does not mention prerequisites, edge cases, or alternative tools for related calculations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_glycemic_load (A)

Calculate glycemic load (GL) per food and total for a meal. Returns: {thresholds}. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
foods | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool calculates GL but does not disclose whether it returns per-food and total values explicitly, nor any assumptions or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that covers the essential purpose without unnecessary words. It is front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with a well-defined schema, the description is largely sufficient. It could mention that the output includes per-food and total GL, but this is not critical given no output schema is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides full coverage for the 'foods' parameter and its sub-properties. The description adds no additional meaning beyond what the schema already conveys, such as the GL formula or expected units.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
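The GL formula the description omits is standard (GL = GI × carbohydrates in grams / 100); the sketch below uses hypothetical 'gi'/'carbs_g' field names and an assumed {per_food, total} return shape:

```python
def glycemic_load(foods):
    """foods: list of {'name', 'gi', 'carbs_g'} dicts.

    Field names and the {per_food, total} return shape are
    illustrative guesses, not the server's documented schema.
    """
    # GL = glycemic index x carbohydrate grams / 100, per food item
    per_food = {f["name"]: f["gi"] * f["carbs_g"] / 100 for f in foods}
    return {"per_food": per_food, "total": sum(per_food.values())}
```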

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates glycemic load per food and total for a meal. It uses a specific verb ('calculate') and resource ('glycemic load'), and distinguishes itself from the many other calculator siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives, such as other nutrition calculators. It also doesn't mention prerequisites like needing GI values.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_gpa_french (B)

Convert French school grades (out of 20) to GPA and academic mention. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
note_1 | Yes | Grade 1 out of 20 |
note_2 | Yes | Grade 2 out of 20 |
note_3 | No | Grade 3 out of 20 (optional) |
note_4 | No | Grade 4 out of 20 (optional) |
note_5 | No | Grade 5 out of 20 (optional) |
coeff_1 | No | Coefficient for grade 1 |
coeff_2 | No | Coefficient for grade 2 |
coeff_3 | No | Coefficient for grade 3 (0 if unused) |
coeff_4 | No | Coefficient for grade 4 (0 if unused) |
coeff_5 | No | Coefficient for grade 5 (0 if unused) |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It does not mention that the tool averages multiple grades using coefficients, nor does it explain the GPA scale or mention system. The return format is unspecified due to lack of output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence but is under-specified for a tool with 10 parameters. It lacks necessary details about weighting and output, making it more incomplete than concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, no output schema), the description fails to explain coefficient usage, GPA mapping, or return values. It is insufficient for proper agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter description coverage, so the baseline is 3. The description adds no additional meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts French school grades (out of 20) to GPA and academic mention. The verb 'Convert' and resource 'French school grades' are specific, and it distinguishes from generic grade calculators like calculate_grade_average.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
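The mention thresholds below are standard in the French system; the linear mapping to a 4.0 scale is only one common convention, assumed here because the tool does not document its mapping (a lookup table is equally plausible):

```python
# Standard French "mention" thresholds (out of 20).
MENTIONS = [(16, "Très bien"), (14, "Bien"), (12, "Assez bien"), (10, "Passable")]

def french_gpa(notes, coeffs=None):
    if coeffs is None:
        coeffs = [1.0] * len(notes)       # unweighted average when coefficients omitted
    avg = sum(n * c for n, c in zip(notes, coeffs)) / sum(coeffs)
    mention = next((m for threshold, m in MENTIONS if avg >= threshold), "Sans mention")
    # Linear scaling to a 4.0 scale is an assumption, not the tool's
    # documented conversion.
    return {"average_20": avg, "gpa": round(avg / 20 * 4, 2), "mention": mention}
```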

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for French academic grading, providing clear context. However, it does not explicitly state when not to use or name alternative tools for non-French systems.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_grade_average (C)

Compute simple or weighted grade average. Use for school report cards. Inputs: grades list, optional weights/coefficients. Returns weighted average and missing-grade-needed forecast. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
grades | Yes | Array of grades |
coefficients | No | Optional array of coefficients/weights |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavior. It fails to mention that a simple average is computed when coefficients are omitted, and does not describe the return format (a single number) or error handling. The formula is implied but not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no unnecessary words. It is front-loaded with the core purpose. However, it may be too brief for an agent to fully understand without additional context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has two parameters and no output schema. The description does not explain what the output represents (e.g., a numeric average) or handle edge cases like mismatched array lengths. This is insufficient for complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are already documented. The description adds the concept of 'simple or weighted' averages, but this is a minor addition beyond the schema's descriptions for 'grades' and 'coefficients'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
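The simple-versus-weighted behavior can be sketched as follows; the fallback to equal weights and the mismatched-length error are assumptions about the tool, not documented behavior:

```python
def grade_average(grades, coefficients=None):
    if coefficients is None:
        coefficients = [1.0] * len(grades)   # simple average when weights are omitted
    if len(coefficients) != len(grades):
        # Assumed handling; the tool's actual behavior for mismatched
        # array lengths is undocumented.
        raise ValueError("grades and coefficients must have the same length")
    return sum(g * c for g, c in zip(grades, coefficients)) / sum(coefficients)
```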

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Compute simple or weighted grade average' clearly states the tool's function and distinguishes it from sibling tools like 'calculate_average' (generic) and 'calculate_gpa_french' (specific grade system). However, it could be more explicit about the use case for academic grading.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus similar tools like 'calculate_average' or 'calculate_gpa_french'. The description does not mention alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_grade_needed (B)

Calculate the grade needed on remaining exams to reach target average. Returns: {error}. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
exams_done | Yes | Number of exams completed |
exams_total | Yes | Total number of exams |
target_average | Yes | Target final average |
current_average | Yes | Current average out of 20 |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It only states a calculation, but does not disclose assumptions (e.g., equal weighting of exams), the nature of the result (e.g., number or range), or edge cases (e.g., impossible targets). This is insufficient for a mathematical tool without output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
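Assuming equally weighted exams (the very assumption flagged above as undisclosed), the required grade follows from simple algebra; the return shape here is hypothetical:

```python
def grade_needed(current_average, exams_done, exams_total, target_average):
    # target * total = current * done + needed * remaining, solved for `needed`.
    remaining = exams_total - exams_done
    needed = (target_average * exams_total - current_average * exams_done) / remaining
    # Flag impossible targets: French grades are capped at 20.
    return {"grade_needed": round(needed, 2), "achievable": needed <= 20}
```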

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that conveys the essential purpose without any unnecessary words. It is appropriately front-loaded and every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there are no annotations and no output schema, the description is too brief. It does not explain what the tool returns, handle edge cases, or mention assumptions. For a simple calculator, this may be acceptable but it lacks completeness for safe and confident use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add additional meaning beyond what the schema already provides (e.g., explaining the relationship between parameters or constraints). It does not improve understanding of the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate' and the resource 'grade needed on remaining exams to reach target average', making the specific function obvious. It distinguishes this from sibling tools like calculate_grade_average, which likely computes current average.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies that the tool is used when a student wants to determine the required grade on remaining exams to achieve a target average, but it does not explicitly state when to use it versus alternatives like calculate_grade_average or calculate_average. No when-not or exclusions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_gravel_quantity (C)

Compute gravel volume (m³) and weight (tonnes) for a surface and depth. Use for paths, foundations, drainage. Inputs: area, depth, gravel density. Returns volume and weight. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
width_m | Yes | Width m |
depth_cm | Yes | Depth cm |
length_m | Yes | Length m |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as assumptions about density for weight calculation, precision, or output format. The description is too brief to convey important behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, which is acceptable for a simple tool, but it omits necessary details like output units or assumptions. It is not verbose but could be more informative without being lengthy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and lack of output schema, a short description might suffice, but it fails to explain how weight is calculated (e.g., assumed density) or what the output format is. This makes it incomplete for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
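A plausible reading of the computation, with the density assumption made explicit (1.5 t/m³ is a typical bulk density for gravel, assumed here because the input schema exposes no density parameter):

```python
def gravel_quantity(length_m, width_m, depth_cm, density_t_per_m3=1.5):
    # Density default is an assumption: the tool's schema has no density
    # input, so some fixed value must be baked in server-side.
    volume_m3 = length_m * width_m * (depth_cm / 100)   # convert depth to metres
    return {"volume_m3": volume_m3, "weight_t": volume_m3 * density_t_per_m3}
```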

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter, but those descriptions are minimal (just units). The tool description adds no additional meaning beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates gravel volume and weight, which distinguishes it from other calculation tools for different materials. However, it implies weight calculation without providing density input, which may be misleading.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other calculation tools for similar materials (e.g., calculate_soil, calculate_sand). The description does not specify prerequisites or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_grocery_unit_comparison (A)

Compare unit prices of grocery items — normalizes g→kg, mL/cL→L. Returns: {best_value, savings_vs_priciest}. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
items | Yes | Items: name, price, quantity, unit (kg/g/L/mL/cl/unit) |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the unit normalization behavior but does not explain what the tool returns (e.g., best value, sorted list). Since annotations are absent, more detail on return format or edge cases would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One concise sentence that communicates the core function efficiently. It could be slightly restructured to front-load the normalization aspect, but it is well-sized and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description is somewhat incomplete. It does not describe what the tool outputs or how comparisons are presented, leaving some ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds the normalization context to the 'unit' parameter beyond the schema (e.g., 'normalizes g→kg, mL/cL→L'). This provides meaningful interpretation for the unit field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
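The normalization the description hints at can be sketched as follows; the conversion table mirrors the documented units and the return keys mirror the documented {best_value, savings_vs_priciest} shape, while the function name and 'unit_prices' field are hypothetical:

```python
# Conversion factors to the base unit (kg, L, or piece).
UNIT_FACTORS = {"kg": 1.0, "g": 0.001, "L": 1.0, "mL": 0.001, "cl": 0.01, "unit": 1.0}

def compare_unit_prices(items):
    """items: list of {'name', 'price', 'quantity', 'unit'} dicts."""
    per_base_unit = {
        it["name"]: it["price"] / (it["quantity"] * UNIT_FACTORS[it["unit"]])
        for it in items
    }
    best = min(per_base_unit, key=per_base_unit.get)
    priciest = max(per_base_unit, key=per_base_unit.get)
    return {
        "unit_prices": per_base_unit,
        "best_value": best,
        "savings_vs_priciest": per_base_unit[priciest] - per_base_unit[best],
    }
```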

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares unit prices and normalizes units, with specific examples of conversions (g→kg, mL/cL→L). It effectively conveys the core purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. While the purpose is clear, the description does not mention when not to use it or provide context for selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_harvest_date (B)

Estimate harvest date for vegetables based on sowing date and region. Returns: {harvest_date, days_to_harvest}. See list_bundles for related 'jardinage' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
region | Yes | Growing region: north (+10 days), south (-10 days), mediterranean (-15 days) |
plant_type | Yes | Type of vegetable |
sowing_date | Yes | Sowing date in ISO format (YYYY-MM-DD) |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It states 'estimate' but does not disclose methodology (e.g., average days, adjustments), what happens with invalid inputs, or any limitations. Basic but lacks behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence, very concise and front-loaded. However, it sacrifices detail for brevity. Could benefit from one more sentence on return or behavior without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema or annotations. Simple tool with 3 required params. The description does not explain return format, error handling, or caveats. Adequate for a basic estimation but leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all 3 parameters with descriptions (100% coverage). The description adds no extra meaning beyond the schema. Baseline 3 as schema already provides enum values with adjustment hints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'estimate' and the resource 'harvest date' with scope 'based on sowing date and region'. It distinguishes itself among many calculation tools by specifying vegetable harvest estimation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
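A sketch of the likely computation: the region offsets come straight from the input schema, but the base maturity table below is an illustrative placeholder, not the server's actual data:

```python
from datetime import date, timedelta

# Base days-to-maturity per vegetable: placeholder values for illustration.
BASE_DAYS = {"tomato": 120, "carrot": 90}
# Offsets as documented in the 'region' parameter description.
REGION_OFFSET_DAYS = {"north": 10, "south": -10, "mediterranean": -15}

def harvest_date(plant_type, sowing_date, region):
    days = BASE_DAYS[plant_type] + REGION_OFFSET_DAYS[region]
    harvest = date.fromisoformat(sowing_date) + timedelta(days=days)
    return {"harvest_date": harvest.isoformat(), "days_to_harvest": days}
```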

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or when not to use it. The description only states what it does, not when.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_hat_size (A)

Calculate hat size in FR/EU, US/UK systems and standard S/M/L/XL from head circumference (cm). Returns: {head_circumference_cm, FR_EU, US_UK, standard_size}. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
head_circumference_cm | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states basic functionality but does not disclose rounding, accuracy, boundary conditions, or whether it returns all systems. For a simple calculation tool, minimal behavioral context is given.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no unnecessary words. It efficiently conveys purpose, input, and output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 1-parameter calculator, the description covers the essential purpose and input. The output is implied (size in listed systems). Could optionally mention return format, but overall complete enough for the complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. The description adds context that the input is in cm, which is valuable. However, it does not provide additional details like valid range or unit confirmation beyond what the schema already indicates (exclusiveMinimum: 0). Baseline is 3 due to low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'calculate', the resource 'hat size', and specifies the output systems (FR/EU, US/UK, S/M/L/XL) and input (head circumference in cm). It uniquely identifies this tool among many sibling calculate_* tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have head circumference and need hat sizes, but does not explicitly state when to use this tool versus alternatives (e.g., other size calculators). No exclusions or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
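The conversion this card describes (head circumference in cm to FR/EU, US/UK, and S/M/L/XL sizes) follows common sizing conventions: EU hat size is typically the circumference in cm, and US size is the head diameter in inches. A minimal sketch; the rounding and the letter-size thresholds below are illustrative assumptions, not the server's documented behavior:

```python
import math

def hat_sizes(head_circumference_cm):
    """Hedged sketch: EU hat size is conventionally the circumference in cm;
    US size is the diameter in inches (circumference / pi), rounded to 1/8."""
    eu = round(head_circumference_cm)
    us = round(head_circumference_cm / 2.54 / math.pi * 8) / 8
    # Letter-size thresholds are illustrative assumptions.
    for letter, upper in (("S", 55), ("M", 58), ("L", 61)):
        if head_circumference_cm < upper:
            break
    else:
        letter = "XL"
    return {"fr_eu": eu, "us": us, "letter": letter}
```
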

calculate_heart_rate_zones (grade A)

Calculate heart rate training zones Z1-Z5, optionally using Karvonen method. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
max_hr | Yes | Maximum heart rate in bpm |
resting_hr | No | Resting heart rate for Karvonen method (bpm) |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, placing full burden on description. The description does not disclose behavioral traits beyond stating the calculation; it omits details like side effects (none expected), prerequisites, or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with core purpose, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool with no output schema, the description covers the essential functionality. However, given the existence of similar sibling tools, additional details about zone definitions or output could improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description adds that the Karvonen method is optional, but this is already implied by the resting_hr parameter description. Minimal added value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates heart rate training zones Z1-Z5, with optional Karvonen method. It distinguishes from siblings like 'calculate_max_heart_rate' and 'calculate_training_zones_running' by being a generic zone calculator.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions the Karvonen method as optional but provides no guidance on when to use this tool versus alternatives like 'calculate_training_zones_running'. No explicit when-to-use or when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
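For context on the Karvonen method named in this card's description: it derives training targets from heart-rate reserve (max minus resting). A minimal sketch; the Z1-Z5 intensity cutoffs below are assumptions, since the server does not document its zone boundaries:

```python
def karvonen_zones(max_hr, resting_hr):
    """Karvonen method: target HR = resting + intensity * (max - resting)."""
    # Intensity bands per zone are illustrative assumptions.
    bounds = [(0.50, 0.60), (0.60, 0.70), (0.70, 0.80), (0.80, 0.90), (0.90, 1.00)]
    reserve = max_hr - resting_hr
    return {
        f"Z{i}": (round(resting_hr + lo * reserve), round(resting_hr + hi * reserve))
        for i, (lo, hi) in enumerate(bounds, start=1)
    }
```

Without `resting_hr`, implementations typically fall back to plain percentages of `max_hr`, which is presumably why the parameter is optional here.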

calculate_heat_index (grade B)

Calculate the apparent temperature (heat index) from temperature and humidity. Returns: {heat_index_c, feels_warmer_by_degrees, risk_level}. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
humidity_pct | Yes | Relative humidity in percent |
temperature_c | Yes | Air temperature in degrees C |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as idempotency, side effects, or rate limits. Even for a pure calculation tool some transparency is expected, but none is offered beyond the formula.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no extraneous words, making it highly concise and front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the simplicity, the description lacks completeness: it does not mention output format, valid input ranges beyond schema, or edge cases. No output schema exists, so more context is needed for full usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with well-defined parameters (temperature_c in C, humidity_pct in % with bounds). The description merely restates 'from temperature and humidity' without adding new semantic insight, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates heat index (apparent temperature) from temperature and humidity, using a specific verb and resource. It distinguishes itself among many sibling calculate tools by naming the exact output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like wind chill or dew point calculations. No context on prerequisites or when not to use it is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
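For reference, heat index calculators commonly implement the NWS Rothfusz regression; whether this server uses it is not stated. A sketch, assuming that regression (defined in Fahrenheit, so inputs are converted):

```python
def heat_index_c(temperature_c, humidity_pct):
    """NWS Rothfusz regression, valid roughly above 27 C and 40% RH."""
    t = temperature_c * 9 / 5 + 32  # regression operates in Fahrenheit
    rh = humidity_pct
    hi_f = (-42.379 + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh - 6.83783e-3 * t * t
            - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
            + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)
    return round((hi_f - 32) * 5 / 9, 1)
```

Note the regression is only meaningful in hot, humid conditions; a production tool would fall back to a simpler formula (or the raw temperature) outside that range.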

calculate_heat_pump_cop (grade C)

Compute heat pump Coefficient of Performance (COP). Use for HVAC efficiency analysis. Inputs: heat output kW, electric input kW. Returns COP and seasonal SCOP estimate. See list_bundles for related 'energie' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
indoor_temp | No | Indoor target °C |
outdoor_temp | Yes | Outdoor temperature °C |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description gives no behavioral information such as side effects, return format, or data requirements. The agent has no insight into what the tool does beyond the name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short (5 words) and concise, but it lacks substance. It does not earn its place beyond stating the resource, making it minimally acceptable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and a simple input schema, the description fails to explain what the tool returns or how the inputs relate, leaving an agent without enough information to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are already documented. The description adds no extra meaning beyond the tool's purpose, but the schema itself is adequate. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Heat pump coefficient of performance' identifies the resource but lacks a verb. The tool name implies calculation, but the description alone does not state an action, making purpose somewhat implied rather than explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the dozens of sibling calculate_* tools. There is no mention of context, prerequisites, or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
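The ambiguity the review flags is visible in the card itself: the description mentions heat output and electric input in kW, while the schema takes indoor and outdoor temperatures. A temperature-based estimate would plausibly scale the Carnot ceiling COP = T_hot / (T_hot - T_cold); the 0.45 efficiency factor below is an assumption, not the server's documented model:

```python
def estimate_cop(outdoor_temp, indoor_temp=21.0, carnot_efficiency=0.45):
    """Carnot ceiling scaled by a typical real-world efficiency factor (assumption)."""
    t_hot = indoor_temp + 273.15   # kelvin
    t_cold = outdoor_temp + 273.15
    if t_hot <= t_cold:
        return float("inf")  # no temperature lift needed
    carnot_cop = t_hot / (t_hot - t_cold)
    return round(carnot_efficiency * carnot_cop, 2)
```
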

calculate_horse_weight (grade B)

Estimate horse weight using Carroll formula from heart girth and body length. Use for vets, feed dosing. Inputs: heart girth cm, body length cm. Returns weight kg. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
body_length_cm | Yes | Body length cm |
heart_girth_cm | Yes | Heart girth circumference cm |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as idempotency, safety, or side effects. It merely states the formula used, which is insufficient for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loaded with the key information. It is concise but could be structured more formally (e.g., bullet points) for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema and no annotations, yet the description does not explain the return value, units, or any special cases. For a tool with two parameters, this lack of context makes it incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides full coverage (100%) for both parameters, so the description adds minimal extra meaning beyond the schema. The mention of 'Carroll formula' implies that both measurements are needed, but no further parameter details are added.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Estimate'), the resource ('horse weight'), and the method ('Carroll formula'). This distinguishes it from sibling 'calculate_*' tools which are general.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. There is no mention of prerequisites, limitations, or scenarios where other tools might be preferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
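The Carroll formula named in the description is commonly cited as girth² × length / 11877 (cm inputs, kg output). A sketch under that assumption; the divisor is the commonly published one, not confirmed from this server:

```python
def horse_weight_kg(heart_girth_cm, body_length_cm):
    """Carroll & Huntington estimate: girth^2 * length / 11877 (commonly cited divisor)."""
    return round(heart_girth_cm ** 2 * body_length_cm / 11877, 1)
```
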

calculate_hourly_cost (grade C)

Compute fully-loaded hourly cost-to-company. Use for project pricing or freelance rate. Inputs: monthly salary, social charges %, billable hours/month. Returns true hourly cost. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
work_days | No | Working days/year |
charges_pct | No | Employer charges % |
annual_gross | Yes | Annual gross salary EUR |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must disclose behavioral traits. It does not mention that the tool uses default values (work_days=218, charges_pct=45), nor does it specify the calculation formula, rounding behavior, output unit, or assumptions (e.g., company perspective). This is a significant gap for a simple but non-trivial calculation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (four words) but omits critical information such as the formula used or what is returned. Brevity is valued, but here it comes at the cost of completeness; one or two additional sentences would convey the essential context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (three parameters, no output schema), the description fails to provide a complete picture. It does not explain what the tool returns (e.g., hourly cost in EUR per hour), the formula used (annual_gross / work_days * (1 + charges_pct/100)), or any limitations. The description leaves the agent with unanswered questions about usage and output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with each parameter having a clear description (e.g., annual_gross: 'Annual gross salary EUR'). The description adds no additional meaning beyond the schema, which is adequate for understanding parameters. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Hourly cost to company' combined with the tool name clearly indicates the tool calculates the hourly cost to an employer given annual salary and charges. It is specific to a company cost perspective, distinguishing it from personal hourly wage calculators like calculate_salary_hourly_to_annual. However, it could explicitly state the verb 'calculate' for clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as calculate_salary_hourly_to_annual or calculate_employer_cost_fr. The description offers no context for selecting this tool over siblings, and no when-not-to-use information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
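The formula and defaults the review assumes (annual_gross / work_days * (1 + charges_pct/100), with work_days=218 and charges_pct=45) can be sketched. Note that formula yields a daily cost; the hours_per_day parameter below is a hypothetical addition to turn it into an hourly figure:

```python
def hourly_cost(annual_gross, work_days=218, charges_pct=45, hours_per_day=7):
    """Fully loaded daily cost spread over billable hours.
    Defaults mirror the review's assumptions; hours_per_day is hypothetical."""
    daily_cost = annual_gross / work_days * (1 + charges_pct / 100)
    return round(daily_cost / hours_per_day, 2)
```
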

calculate_housing_aid (grade A)

Estimate French housing aid (APL — Aide Personnalisée au Logement). Returns: {rent, rent_ceiling, estimated_apl, note}.

Parameters (JSON Schema)
Name | Required | Description | Default
rent | Yes | Monthly rent in euros |
city_zone | No | City zone: 1 (Paris/IDF), 2 (large cities), 3 (rural) | 2
household_size | No | Number of people in household (1-6) |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must disclose behavior. 'Estimate' implies a read-only calculation, which is accurate, but no details about side effects, data usage, or safety are given. For a calculation tool, this is minimally adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that conveys the tool's purpose without any unnecessary words. It is front-loaded with the key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is adequate for a simple calculator, but it does not explain the output format or any special behavior. Since there is no output schema, the description could provide more context about what the estimate returns (e.g., monthly amount in euros).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents each parameter's meaning. The description adds no extra context about how parameters affect the calculation, but the baseline is appropriate given full coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates French housing aid (APL), a specific calculation. The verb 'estimate' combined with the specific program name clearly distinguishes it from hundreds of sibling calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention prerequisites, conditions, or any context for use despite the tool being part of a large set of similar calculators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_housing_loan_comparison (grade B)

Compare multiple mortgage offers sorted by total cost. Returns: {offers_count, best_offer, comparison}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
offers | Yes | List of mortgage offers to compare |
loan_amount | Yes | Loan amount in EUR |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden of behavioral disclosure. It states that the tool compares and sorts, but does not mention whether it is read-only, what the output contains, or any side effects. Being a calculation tool, it is likely non-destructive, but this is not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that conveys the essential purpose without any wasted words. It is front-loaded and efficiently communicates the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (two parameters, no output schema), the description is adequate but not fully complete. It does not explain what 'total cost' includes (e.g., interest, insurance) or the output format, leaving some ambiguity for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with clear descriptions for all parameters (e.g., rate, bank_name, duration_years, insurance_rate, loan_amount). The description adds no additional meaning beyond the schema, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Compare multiple mortgage offers sorted by total cost' clearly specifies the verb (compare), resource (mortgage offers), and outcome (sorted by total cost). It is distinct from sibling tools, which cover a wide range of calculations, but this one is specific to mortgage comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives, nor does it mention prerequisites or scenarios where it might not be suitable. It is a single sentence with no usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
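Using the offer fields the Parameters review lists (rate, bank_name, duration_years, insurance_rate), a total-cost comparison plausibly looks like the sketch below: standard amortized monthly payment plus flat insurance on the initial capital. The insurance model and the "total cost = payments minus principal" definition are assumptions:

```python
def compare_offers(loan_amount, offers):
    """Rank offers by total cost: amortized interest plus flat insurance (assumed model)."""
    results = []
    for o in offers:
        r = o["rate"] / 100 / 12          # monthly rate
        n = o["duration_years"] * 12      # number of payments
        payment = loan_amount * r / (1 - (1 + r) ** -n) if r else loan_amount / n
        insurance = loan_amount * o.get("insurance_rate", 0) / 100 / 12
        total_cost = (payment + insurance) * n - loan_amount
        results.append({"bank": o.get("bank_name"),
                        "monthly": round(payment + insurance, 2),
                        "total_cost": round(total_cost, 2)})
    return sorted(results, key=lambda x: x["total_cost"])
```
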

calculate_hydration (grade B)

Compute recommended daily fluid intake by weight, activity, and weather. Use for athletes and outdoor workers. Inputs: weight kg, activity hours, temperature °C. Returns L/day and electrolyte recommendation. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
climate | No | Climate/environment | temperate
weight_kg | Yes | Body weight in kilograms |
activity_minutes | No | Daily exercise duration in minutes |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states the calculation purpose, omitting details such as the formula used, output units (e.g., liters), or how climate affects results. No behavioral insights beyond the basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, direct sentence that precisely conveys the tool's purpose with no extraneous words. It is optimally concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with fully documented parameters, the description provides the essential purpose. However, it lacks usage guidance and behavioral details, making it only adequate rather than complete for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for each parameter. The description simply lists the input factors ('weight, activity and climate'), adding no new semantic information beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates 'daily water intake needs' based on specific factors (weight, activity, climate), distinguishing it from generic calculation tools. It directly addresses the tool's verb and resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not indicate when to use this tool versus alternatives like 'calculate_water_intake' or other hydration tools, nor does it offer context on prerequisites or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
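The kind of formula such a tool typically implements is a weight-based baseline plus activity and climate adjustments. A rule-of-thumb sketch in which every factor (33 mL/kg baseline, 0.5 L per 30 minutes of exercise, a hot-climate bump) is an assumption, not the server's documented formula:

```python
def daily_fluid_l(weight_kg, activity_minutes=0, climate="temperate"):
    """Rule-of-thumb sketch: ~33 mL/kg baseline, ~0.5 L per 30 min of exercise,
    plus a hot-climate bump. All factors are assumptions."""
    liters = weight_kg * 0.033
    liters += activity_minutes / 30 * 0.5
    if climate == "hot":
        liters += 0.5
    return round(liters, 1)
```
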

calculate_hydraulic_pressure (grade C)

Compute hydraulic system pressure P=F/A. Use for hydraulic design. Inputs: force N, area m². Returns pressure in Pa, kPa, bar, psi. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
force_n | Yes | Force N |
area_cm2 | Yes | Piston area cm² |

Output Schema (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description fails to disclose any behavioral traits. It does not mention if it's a pure calculation, what units are used in output, or any edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single phrase but is under-specified. Conciseness should not sacrifice informativeness; here it is too brief to be helpful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two required parameters and no output schema or annotations, the description lacks completeness. It does not explain the formula, output units, or any usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with parameter descriptions ('Force N', 'Piston area cm²'), and the tool description adds no extra meaning beyond it, so the baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description 'Hydraulic system pressure' is a noun phrase lacking a verb. It does not state the action (e.g., 'Calculates') or differentiate from siblings like 'calculate_pressure_convert'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool, what prerequisites exist, or how it differs from alternatives. Sibling tools exist but no differentiation provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_hyperfocal_distance (B)

Calculate hyperfocal distance and near/far sharp limits for a lens and aperture. See list_bundles for related 'photographie' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
coc_mm | No | Circle of confusion in mm (default 0.03 for full frame) | -
aperture | Yes | Aperture f-number | -
focal_length_mm | Yes | Lens focal length in millimeters | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
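For reference, the standard hyperfocal formula is H = f²/(N·c) + f. A minimal sketch, assuming the server uses this common form (the function name and return keys are illustrative):

```python
def hyperfocal(focal_length_mm: float, aperture: float, coc_mm: float = 0.03) -> dict:
    """H = f²/(N·c) + f, all lengths in mm; focusing at H keeps
    everything from H/2 to infinity acceptably sharp."""
    h_mm = focal_length_mm ** 2 / (aperture * coc_mm) + focal_length_mm
    return {
        "hyperfocal_m": round(h_mm / 1_000, 2),
        "near_limit_m": round(h_mm / 2_000, 2),  # near sharp limit when focused at H
        "far_limit": "infinity",
    }

# 50 mm lens at f/8 on full frame (CoC 0.03 mm): H ≈ 10.47 m
print(hyperfocal(50, 8))
```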
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavioral traits. It only states what is calculated, not how (e.g., default CoC, return format, unit handling). Insufficient for a tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words. Efficient and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and related sibling tools (e.g., calculate_depth_of_field), the description lacks context about what hyperfocal distance means, formula used, or usage scenarios. Minimal for a specialized calculation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no extra meaning beyond what the schema already provides, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates hyperfocal distance and near/far sharp limits, with specific verb and resource. It distinguishes from siblings like calculate_depth_of_field by naming unique terms.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like calculate_depth_of_field or other photography calculators. Missing context about typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ideal_gas (A)

Solve PV=nRT. Provide any 3 of: pressure_pa, volume_m3, moles, temperature_k. R=8.314. Returns: {error}. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
moles | No | Amount in mol | -
volume_m3 | No | Volume in m³ | -
pressure_pa | No | Pressure in Pa | -
temperature_k | No | Temperature in K | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
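The solve-for-the-missing-variable behavior is easy to mirror. A sketch under the same rules, with an error return echoing the description's documented `{error}` case (return keys otherwise illustrative):

```python
R = 8.314  # gas constant, J/(mol·K)

def ideal_gas(pressure_pa=None, volume_m3=None, moles=None, temperature_k=None):
    """Solve PV = nRT for the single variable left as None."""
    given = [pressure_pa, volume_m3, moles, temperature_k]
    if sum(v is not None for v in given) != 3:
        return {"error": "provide exactly 3 of the 4 variables"}
    if pressure_pa is None:
        return {"pressure_pa": moles * R * temperature_k / volume_m3}
    if volume_m3 is None:
        return {"volume_m3": moles * R * temperature_k / pressure_pa}
    if moles is None:
        return {"moles": pressure_pa * volume_m3 / (R * temperature_k)}
    return {"temperature_k": pressure_pa * volume_m3 / (moles * R)}

# 1 mol at 0 °C and 1 atm occupies ~22.4 L
print(ideal_gas(pressure_pa=101_325, moles=1, temperature_k=273.15))
```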
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavior. It states the formula and the constant R=8.314, but does not mention edge cases (e.g., zero values) or output format. Still, it is fairly transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A few short sentences with no redundant information, each serving a purpose: stating the formula, specifying the input requirement, and giving the constant. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the tool's purpose and input constraints well. Without an output schema, it could clarify that the tool returns the missing variable, but the overall context is sufficient for an agent to understand the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions including units, but the description adds the critical usage pattern 'provide any 3 of' and reiterates units, adding meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool solves the ideal gas law (PV=nRT) and specifies the four variables with units. It is distinct from the many sibling calculation tools by focusing on a specific physics equation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells users to provide any 3 of the 4 parameters, which is clear guidance. It does not mention when not to use this tool or alternatives, but the context is sufficient for correct usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ideal_weight (C)

Estimate ideal body weight using Lorentz and Devine formulas. Returns: {lorentz_kg, devine_kg, average_kg}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
sex | Yes | Biological sex | -
height_cm | Yes | Height in centimeters | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
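The named formulas have well-known published forms, sketched below. The server's exact variants (rounding, how `sex` values are spelled) are assumptions; the return keys mirror the description's `{lorentz_kg, devine_kg, average_kg}`.

```python
def ideal_weight(height_cm: float, sex: str) -> dict:
    """Lorentz and Devine ideal-weight formulas (commonly published forms).
    Assumes sex is 'male' or 'female'."""
    inches_over_5ft = max(height_cm / 2.54 - 60, 0)  # Devine works in inches over 5 ft
    if sex == "male":
        lorentz = height_cm - 100 - (height_cm - 150) / 4
        devine = 50.0 + 2.3 * inches_over_5ft
    else:
        lorentz = height_cm - 100 - (height_cm - 150) / 2.5
        devine = 45.5 + 2.3 * inches_over_5ft
    return {
        "lorentz_kg": round(lorentz, 1),
        "devine_kg": round(devine, 1),
        "average_kg": round((lorentz + devine) / 2, 1),
    }

print(ideal_weight(180, "male"))
```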
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavior. It only states 'estimate...using formulas', which is minimal. It does not mention that this is a pure calculation (no side effects), what output to expect (e.g., a number in kg), or limitations (e.g., not for children). The description adds little value beyond the tool's name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description opens with a single short sentence, very concise. However, its brevity comes at the cost of missing important usage and behavioral details. For a simple calculation tool with well-named parameters, this level of conciseness is acceptable but not optimal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema and no annotations, so the description should provide information about return values (e.g., weight in kg) and possibly the formulas. It fails to do so, leaving the agent uncertain about what the tool returns. The description is incomplete for an effective decision.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for both parameters (height_cm, sex). The description mentions 'Lorentz and Devine formulas' but does not add meaning beyond the schema, such as how the formulas interact with the parameters. Baseline of 3 is appropriate since the schema already documents the parameters adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates ideal body weight using Lorentz and Devine formulas. The verb 'estimate' is appropriate, and specifying the formulas adds specificity. However, it does not differentiate from the sibling 'calculate_ideal_weight_range' or other body weight calculators like 'calculate_bmi', leaving ambiguity about when to use this particular tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Given the large number of sibling calculators (e.g., BMI, body fat, BMR), the absence of usage context is a significant gap. There are no explicit when-to-use or when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ideal_weight_range (C)

Compute healthy weight range based on BMI 18.5-24.9. Use for nutrition planning. Inputs: height cm. Returns ideal weight range (min, max) in kg and lb. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
sex | Yes | Biological sex | -
frame | Yes | Body frame size | -
height_cm | Yes | Height in cm | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
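The BMI-band arithmetic from the description can be sketched directly. The `sex` and `frame` parameters presumably shift the band; this sketch covers only the BMI 18.5–24.9 bounds the description names, with illustrative return keys:

```python
def ideal_weight_range(height_cm: float) -> dict:
    """Healthy weight range from the BMI 18.5-24.9 band: w = BMI * h²."""
    h_m = height_cm / 100
    min_kg = 18.5 * h_m ** 2
    max_kg = 24.9 * h_m ** 2
    return {
        "min_kg": round(min_kg, 1),
        "max_kg": round(max_kg, 1),
        "min_lb": round(min_kg * 2.20462, 1),  # 1 kg ≈ 2.20462 lb
        "max_lb": round(max_kg * 2.20462, 1),
    }

# 170 cm: roughly 53.5-72.0 kg
print(ideal_weight_range(170))
```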
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only restates the tool's name and offers no details on output format, side effects, or how multiple methods are handled. The agent cannot infer behavior beyond the basic calculation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words. It is concise and front-loaded, but could be more informative without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 required parameters, no output schema, and no annotations, the description should explain what the output looks like (e.g., a range of values, methods used). It is too vague for the agent to understand the result format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all 3 parameters, so the schema already explains their meaning. The description adds no additional context. At baseline 3, this is adequate but not enhanced.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates an ideal weight range using multiple methods, which is a specific verb+resource. However, it does not differentiate from the sibling tool 'calculate_ideal_weight' (singular), which might calculate a single value rather than a range.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'calculate_ideal_weight', 'calculate_bmi', or other health-related calculators. There is no mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_impermanent_loss (A)

Calculate impermanent loss for a DeFi liquidity pool position when price ratio changes. Returns: {value_in_pool_ratio}. See list_bundles for related 'crypto' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
price_ratio_change | Yes | Price ratio change multiplier (e.g. 2.0 if token doubled in price, 0.5 if halved) | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
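For a 50/50 constant-product pool, the standard result is that the LP position is worth 2·√r/(1+r) of simply holding, which lines up with the description's `value_in_pool_ratio` key. A sketch assuming the server uses this standard formula (the percentage field is an illustrative addition):

```python
import math

def impermanent_loss(price_ratio_change: float) -> dict:
    """50/50 constant-product pool: LP value vs. holding = 2*sqrt(r)/(1+r).
    A negative loss percentage means the pool position underperforms holding."""
    r = price_ratio_change
    ratio = 2 * math.sqrt(r) / (1 + r)
    return {
        "value_in_pool_ratio": round(ratio, 4),
        "impermanent_loss_pct": round((ratio - 1) * 100, 2),
    }

# token doubles (r = 2): pool worth ~94.28 % of holding, IL ≈ -5.72 %
print(impermanent_loss(2.0))
```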
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must fully disclose behavioral traits. It only states the calculation function without detailing what the output represents (e.g., percentage, value), any assumptions (e.g., pool composition), or side effects. This is insufficient for the agent to fully understand behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single 13-word sentence, extremely concise and front-loaded. Every word serves a purpose with no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the tool's simplicity, the description fails to specify the output format (e.g., percentage, decimal) or any assumptions (e.g., 50/50 pool weighting). Since no output schema exists, the description should compensate but does not, leaving the return value ambiguous.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing a baseline of 3. The description adds no additional meaning beyond the schema's description of 'price_ratio_change' as a multiplier. It does not elaborate on the parameter's role or format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates impermanent loss for a DeFi liquidity pool position when price ratio changes. The verb 'Calculate' and resource 'impermanent loss for a DeFi liquidity pool position' are specific and differentiate it from numerous sibling calculation tools with different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly provides context by stating 'when price ratio changes', indicating the condition for using the tool. However, it lacks explicit guidance on when not to use or alternatives, though the unique purpose reduces ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_inflation_adjusted_value (C)

Compute real (inflation-adjusted) purchasing power of a future amount. Use for retirement or savings goal in today's euros. Inputs: nominal future amount, years, average inflation %. Returns real value. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
years | Yes | Number of years | -
amount | Yes | Amount in EUR | -
inflation_rate | Yes | Annual inflation rate percent | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
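The deflation arithmetic is a one-liner: real = nominal / (1 + i)^years. A minimal sketch with an illustrative return key:

```python
def real_value(amount: float, inflation_rate: float, years: int) -> dict:
    """Deflate a future nominal amount to today's purchasing power."""
    factor = (1 + inflation_rate / 100) ** years  # cumulative inflation factor
    return {"real_value_eur": round(amount / factor, 2)}

# €100 000 received in 20 years at 2 % average inflation ≈ €67 297 today
print(real_value(100_000, 2.0, 20))
```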
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It only states a basic calculation purpose, without mentioning assumptions (e.g., formula used), limitations, or side effects. The agent is left to infer that this is a safe read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no waste, front-loading the main action. However, it is extremely brief and could be expanded slightly without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (3 parameters, no output schema), the description provides the core purpose but lacks context about the result interpretation (e.g., 'purchasing power' in what units), edge cases, or formula details. It is adequate but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides full descriptions for all 3 parameters (amount in EUR, years, inflation rate percent), achieving 100% coverage. The description adds no additional meaning beyond schema, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Calculate real purchasing power after inflation' clearly states the verb (calculate) and resource (real purchasing power after inflation). It is specific, but it does not differentiate from sibling tools like 'calculate_inflation_adjustment' or 'calculate_purchasing_power', which may have overlapping purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention any context, prerequisites, or exclusions, leaving the agent uninformed about optimal usage scenarios among the many sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_inflation_adjustment (C)

Adjust a nominal amount to a target year using a constant inflation rate. Use for real-value comparisons across time. Inputs: amount, inflation rate %, years. Returns adjusted value and total inflation factor. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
years | Yes | Number of years | -
amount | Yes | Original amount | -
inflation_rate | Yes | Annual inflation rate in % | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
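The forward adjustment is the mirror image of deflating: adjusted = amount · (1 + i)^years, and the factor itself corresponds to the description's 'total inflation factor'. A sketch with illustrative return keys:

```python
def inflation_adjust(amount: float, inflation_rate: float, years: int) -> dict:
    """Inflate a present amount forward by a constant annual rate."""
    factor = (1 + inflation_rate / 100) ** years
    return {
        "adjusted_value": round(amount * factor, 2),
        "inflation_factor": round(factor, 4),
    }

# €1 000 today at 3 % inflation for 10 years ≈ €1 343.92
print(inflation_adjust(1000, 3.0, 10))
```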
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full responsibility for behavioral disclosure. It only states the high-level purpose but does not explain the calculation method (e.g., simple vs. compound inflation adjustments), potential limitations, or what the returned value represents.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, making it very concise and easy to parse. However, it could be slightly more precise or include additional context without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema and no annotations, the description should provide more context about the return value and usage constraints. It fails to indicate whether the result is a future value or an adjusted amount, and it does not cover edge cases or parameters beyond the basics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters. The description 'Adjust an amount for inflation over time' adds minimal semantic value beyond the schema, which already defines 'amount', 'inflation_rate', and 'years' with their meanings.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Adjust an amount for inflation over time,' indicating a specific verb and resource. However, there is a sibling tool 'calculate_inflation_adjusted_value' with an almost identical name, and the description does not differentiate between them, which would help an agent select the correct tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as the similar sibling 'calculate_inflation_adjusted_value'. There is no mention of prerequisites, when not to use it, or context for its application.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_inheritance_tax (B)

Calculate French inheritance tax (droits de succession) based on relationship and amount. Returns: {amount, abatement, taxable_base, tax_due, effective_rate_pct, marginal_rate_pct}. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
amount | Yes | Inherited amount in euros | -
relationship | Yes | Relationship to deceased: spouse, child, sibling, other | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
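Mechanically this is an abatement followed by progressive brackets. The sketch below is illustrative only: it hardcodes a simplified direct-line (child) case with a truncated, illustrative bracket table rather than the full French barème, and ignores the tool's `relationship` branching; the return keys mirror the ones the description lists.

```python
# Illustrative figures for the direct-line (child) case only;
# the real barème has more brackets and relationship-specific abatements.
CHILD_ABATEMENT = 100_000
CHILD_BRACKETS = [           # (upper bound of taxable base, rate)
    (8_072, 0.05),
    (12_109, 0.10),
    (15_932, 0.15),
    (552_324, 0.20),
    (float("inf"), 0.45),
]

def inheritance_tax(amount: float) -> dict:
    """Abatement, then tax each slice of the taxable base at its bracket rate."""
    taxable = max(amount - CHILD_ABATEMENT, 0)
    tax, lower, marginal = 0.0, 0.0, 0.0
    for upper, rate in CHILD_BRACKETS:
        if taxable > lower:
            tax += (min(taxable, upper) - lower) * rate
            marginal = rate
        lower = upper
    return {
        "amount": amount,
        "abatement": CHILD_ABATEMENT,
        "taxable_base": taxable,
        "tax_due": round(tax, 2),
        "marginal_rate_pct": marginal * 100,
        "effective_rate_pct": round(tax / amount * 100, 2) if amount else 0.0,
    }

print(inheritance_tax(200_000))
```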
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description adds no behavioral details beyond the schema. It does not disclose whether the calculation includes tax brackets, deductions, or limitations, which is important for a jurisdiction-specific tax tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no superfluous words. It is front-loaded with the tool's core purpose and immediately actionable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should explain what is returned (e.g., tax amount). It also lacks details on French tax rules, brackets, or year applicability. However, given the schema's clarity on inputs, it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have descriptions in the schema, achieving 100% coverage. The description only restates the parameter names and purpose ('based on relationship and amount'), adding no extra meaning or constraints beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Calculate') and resource ('French inheritance tax'), and clearly indicates it is based on relationship and amount. This distinguishes it from sibling tools like 'calculate_french_income_tax' or other tax calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., other tax calculators). It does not mention prerequisites, exclusions, or context, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_insulation_r (B)

Compute thermal resistance R = thickness/lambda. Use for insulation specification (RT2020/RE2020). Inputs: thickness m, conductivity λ W/m·K. Returns R-value m²·K/W. See list_bundles for related 'construction' calculators.

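The formula quoted in this description is easy to sanity-check locally. A minimal sketch, assuming the schema's thickness_mm really is millimetres while the formula needs metres (the unit conversion is the one step the description leaves implicit):

```python
def insulation_r_value(thickness_mm: float, lambda_w_mk: float) -> float:
    """Thermal resistance R = thickness / lambda, in m²·K/W."""
    if thickness_mm <= 0 or lambda_w_mk <= 0:
        raise ValueError("thickness and conductivity must be positive")
    thickness_m = thickness_mm / 1000  # schema supplies mm; the formula needs m
    return thickness_m / lambda_w_mk

# 200 mm of mineral wool (lambda ~0.035 W/m·K) gives R of about 5.7 m²·K/W
print(round(insulation_r_value(200, 0.035), 2))
```

Returning the unit alongside the number, as the description's "m²·K/W" promises, is what resolves the ambiguity flagged in the scores that follow.
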
ParametersJSON Schema
NameRequiredDescriptionDefault
lambdaYesConductivity W/(m.K)
thickness_mmYesThickness mm

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but only states the formula. It does not disclose output units (e.g., m²K/W), error handling, or behavior with invalid inputs (though schema provides min constraints). The agent is left guessing about the return format.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (one sentence), but it omits important details like output units and usage context. It is not front-loaded with critical information; compactness comes at the cost of completeness.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema), the description fails to specify the output unit or any caveats. While the formula is given, the agent cannot determine if the result is, for example, in K/W or m²K/W, leaving the context incomplete.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds value by showing the relationship between thickness and lambda via the formula, but does not introduce new semantic information beyond the schema.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates thermal resistance R using the explicit formula R = thickness/lambda. This is a specific verb-resource-action pair, and the formula distinguishes it from similar sibling tools like calculate_insulation_r_value.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., calculate_insulation_r_value) or any prerequisites/conditions. The description lacks any usage context, leaving the agent to infer applicability.

calculate_insulation_r_value (A)

Calculate thermal R-value: R = thickness/lambda. Compare with RE2020 targets. Returns: {lambda_w_mk, r_value_m2KW, re2020_targets, walls_ok, roof_ok, floor_ok}. See list_bundles for related 'science' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
lambdaYesConductivity W/(m·K) — mineral wool ~0.035, polyurethane ~0.025
thickness_mmYesInsulation thickness in mm

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey all behavioral context. It gives the formula and a comparison target, but lacks details about output format, units, or behavior for edge cases (e.g., invalid inputs beyond schema constraints).

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two short sentences (14 words) that present the core purpose and additional context (RE2020). No redundant or extraneous content.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations or output schema, the description provides the formula and a practical use case. It could explicitly state the output (e.g., R-value in m²·K/W), but for a simple tool, it is largely complete.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already covers both parameters with descriptions. The description adds value by providing the formula and example conductivity values (mineral wool ~0.035, polyurethane ~0.025), which enriches understanding beyond the schema.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes thermal R-value using the formula R = thickness/lambda. It mentions a specific use case (RE2020 comparison), but does not differentiate from the sibling tool 'calculate_insulation_r'.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for calculating R-values and comparing with RE2020 targets, but provides no explicit guidance on when to use this tool over alternatives or when not to use it.

calculate_insurance_estimate (B)

Estimate annual car insurance from vehicle value, driver age and bonus-malus. Returns: {annual_premium_eur, monthly_eur, note}. See list_bundles for related 'auto-transport' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
driver_ageYesDriver age
vehicle_valueYesVehicle value EUR
bonus_malus_coefficientNoBonus-malus (0.5=best, 3.5=worst)

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description should disclose behavioral traits. It only states it estimates insurance but doesn't mention accuracy, data sources, side effects, or constraints beyond schema.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single concise sentence of 12 words, front-loaded with action and key information. No extraneous content.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should clarify return format. It mentions 'annual' estimate but doesn't specify units, currency, or whether it returns a number or object. Adequate but incomplete.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description merely lists parameters without adding new meaning or clarifying edge cases, meeting minimum threshold.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Estimate', resource 'annual car insurance', and specific inputs (vehicle value, driver age, bonus-malus). Distinct from siblings like calculate_travel_insurance_estimate.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. Lacks context about prerequisites or scenarios where it's appropriate.

calculate_international_shipping (C)

Estimate international shipping cost and delivery time by carrier and weight. Use for e-commerce or expat shipping. Inputs: from-country, to-country, weight kg, dimensions. Returns cost range and lead time. See list_bundles for related 'voyage' calculators.

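The Behavior note for this tool asks whether actual or volumetric weight is used. The standard carrier convention, which this tool may or may not follow, bills the greater of the two; the divisor of 5000 cm³/kg below is an illustrative assumption, not taken from the tool:

```python
def chargeable_weight_kg(weight_kg: float, length_cm: float,
                         width_cm: float, height_cm: float,
                         divisor: float = 5000) -> float:
    """Chargeable weight = max(actual, volumetric); divisor varies by carrier."""
    volumetric_kg = (length_cm * width_cm * height_cm) / divisor
    return max(weight_kg, volumetric_kg)

# A light but bulky 2 kg parcel (50 x 40 x 30 cm) is billed at its 12 kg volumetric weight
print(chargeable_weight_kg(2, 50, 40, 30))
```
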
ParametersJSON Schema
NameRequiredDescriptionDefault
from_zoneYesDestination zone
weight_kgYesActual parcel weight in kg
dimensions_cmYesParcel dimensions in cm (length, width, height)

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It states the calculation method but omits many details: whether it uses actual vs volumetric weight (whichever is greater), currency of cost, taxes/fees, error handling for invalid inputs, or output format. This leaves significant ambiguity for an AI agent.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no filler. It is appropriately concise and front-loads the key information. Every word is meaningful.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having 3 parameters (including a nested object) and no output schema, the description is very brief. It does not explain the return value (e.g., cost in what currency, any restrictions), assumptions (e.g., whether volumetric weight is used only if greater than actual), or edge cases. This is insufficient for the complexity level.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters, so baseline is 3. The description adds context that volumetric weight and destination zone are used, which is helpful but does not deepen understanding of parameter meaning beyond what the schema already provides (e.g., 'from_zone' is an enum, weight_kg is in kg, dimensions_cm are length/width/height).

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates international shipping cost using volumetric weight and destination zone. It specifies the verb and resource, but does not explicitly differentiate from sibling tools like calculate_delivery_cost or calculate_shipping_volumetric, though 'international' and 'destination zone' provide some distinction.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. With many shipping-related siblings, the description should mention scenarios (e.g., 'Use when shipping internationally and needing volumetric weight calculation') but it does not. No exclusions or prerequisites are discussed.

calculate_inventory_eoq (C)

Compute Economic Order Quantity (Wilson formula). Use for supply chain optimization. Inputs: annual demand, order cost, holding cost per unit. Returns EOQ and orders/year. See list_bundles for related 'finance-universal' calculators.

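The Wilson formula named in the description is standard: EOQ = sqrt(2·D·S / H), with orders per year following as D / EOQ. A sketch under the usual notation (D = annual demand, S = cost per order, H = annual holding cost per unit):

```python
import math

def eoq(annual_demand: float, order_cost: float, holding_cost: float):
    """Wilson formula EOQ = sqrt(2*D*S / H); returns (EOQ, orders per year)."""
    quantity = math.sqrt(2 * annual_demand * order_cost / holding_cost)
    return quantity, annual_demand / quantity

# D = 10,000 units/yr, S = 50 per order, H = 2 per unit/yr
quantity, orders_per_year = eoq(10_000, 50, 2)
print(round(quantity, 1), round(orders_per_year, 1))  # about 707 units, about 14 orders/yr
```
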
ParametersJSON Schema
NameRequiredDescriptionDefault
order_costYesCost per order
holding_costYesAnnual holding cost per unit
annual_demandYesAnnual demand units

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description fails to disclose what the tool does (e.g., calculate EOQ), return type, or side effects. The behavioral burden is not met.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise but at the cost of utility. A single phrase is too sparse to be considered well-structured or informative.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks explanation of what the tool returns or how it behaves. With no output schema, the description should compensate but does not.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with adequate parameter descriptions. The description adds no extra meaning beyond the schema, so baseline 3 is appropriate.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description repeats the tool name 'Economic Order Quantity' without adding a verb or clarifying action. It is a tautology, barely stating the tool's function.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternative inventory or calculation tools. No context or prerequisites provided.

calculate_inventory_turnover (C)

Compute inventory turnover ratio = COGS / avg inventory. Use for retail efficiency analysis. Inputs: COGS, average inventory value. Returns turnover and days-on-hand. See list_bundles for related 'finance-universal' calculators.

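Both figures the description promises follow directly from the stated formula; a minimal sketch (days-on-hand computed as 365 / turnover, a common convention; some analysts use 360):

```python
def inventory_turnover(cogs: float, avg_inventory: float):
    """Turnover = COGS / average inventory; days-on-hand = 365 / turnover."""
    turnover = cogs / avg_inventory
    return turnover, 365 / turnover

# COGS of 730,000 against 100,000 average stock: 7.3 turns, about 50 days on hand
turns, days_on_hand = inventory_turnover(730_000, 100_000)
print(turns, round(days_on_hand))
```
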
ParametersJSON Schema
NameRequiredDescriptionDefault
cogsYesCost of goods sold
avg_inventoryYesAverage inventory value

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description does not disclose any behavioral traits (e.g., read-only, side effects). For a simple calculation the disclosure burden is small, but it is still unmet.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short phrase, which is concise but under-specifies the tool. It is front-loaded but lacks substantive information to justify its brevity.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and no description of the return value, the description is incomplete. For a simple calculator, expectations are low, but the description fails to explain what the tool returns or how results are formatted.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover 100% of parameters adequately. The description 'Inventory turnover ratio' adds no extra meaning beyond the schema, resulting in no added value. Baseline score of 3 applies.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Inventory turnover ratio' is a noun phrase that essentially restates the tool name without a verb. It does not clearly state that the tool calculates the ratio, which is the intended action. It is borderline tautological.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. There is no indication of when to use this tool over similar financial calculators, nor any prerequisites or exclusions.

calculate_jet_lag_recovery (A)

Estimate jet lag recovery time based on timezone difference and direction of travel. Returns: {direction, tips}. See list_bundles for related 'voyage' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
timezone_diff_hoursYesTimezone difference in hours (positive = eastward, negative = westward)

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It mentions key factors but does not explain assumptions, accuracy, or how direction affects recovery. The parameter sign already implies direction, but transparency is limited.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, concise sentence that front-loads purpose and key factors. No wasted words.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description explains input and purpose. However, it does not describe the output format or units, which could cause uncertainty. Minimally complete.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter. The tool description adds context linking timezone difference to direction, but no new semantic information beyond the schema.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates jet lag recovery time based on timezone difference and direction of travel. It uses a specific verb and resource, distinguishing it from many sibling calculators that focus on other domains.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies usage for jet lag estimation but does not mention exclusions or other relevant tools like timezone calculators.

calculate_kinetic_energy (B)

Compute kinetic energy KE=½·m·v². Use for physics, vehicle safety analysis. Inputs: mass kg, velocity m/s. Returns kinetic energy in joules. See list_bundles for related 'science' calculators.

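KE = ½·m·v² leaves no room for interpretation beyond the output unit, which the description pins to joules; a one-line sketch:

```python
def kinetic_energy_j(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy KE = 0.5 * m * v**2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# A 1500 kg car at 13.9 m/s (about 50 km/h) carries roughly 145 kJ
print(round(kinetic_energy_j(1500, 13.9)))
```
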
ParametersJSON Schema
NameRequiredDescriptionDefault
mass_kgYesMass in kg
velocity_msYesVelocity in m/s

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the function (calculating kinetic energy) but does not explicitly confirm it is a read-only, side-effect-free operation. However, for a simple calculator, the behavior is implied, so a middle score is appropriate.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (5 words) and front-loaded. However, it is a noun phrase fragment rather than a complete sentence, slightly reducing clarity. It earns its place but could be more structured.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description should at least mention the output unit (joules) or the formula. It does not, leaving ambiguity about the return value. This is a gap for a physics calculation tool.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents both parameters (mass_kg and velocity_ms). The description adds no additional meaning beyond the schema, such as the formula or default units for output. Baseline 3 is correct.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Kinetic energy of a moving object' clearly identifies the resource and context, and the tool name includes 'calculate', so the action is implied. However, it lacks an explicit verb, which would make it fully specific.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus siblings like calculate_energy_physics or calculate_force. No usage context or exclusion criteria are mentioned.

calculate_knitting_yarn (C)

Calculate yarn needed for a knitting project (meters and number of 50g/100m balls). Returns: {meters_of_yarn, balls_50g_100m}. See list_bundles for related 'textile-mode' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
sizeYes
projectYes

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior. It only states calculation purpose but omits assumptions (e.g., gauge, yarn weight), limitations, or accuracy. The agent gains no insight into how the calculation works.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that front-loads the purpose. However, it lacks structural elements like lists or examples that could improve usability.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite few parameters and no output schema, the description fails to explain the return format clearly, assumptions, or how the inputs affect the output. It leaves significant gaps for a practical calculation tool.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% enum coverage but 0% description coverage. The description adds no meaning to the parameters 'project' and 'size'; it doesn't explain enum values like 'scarf' or 'S'.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates yarn needed for a knitting project, specifying output in meters and number of 50g/100m balls. It uses a specific verb and resource, distinguishing it from sibling calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_laundry_cost (grade C)

Calculate weekly and annual laundry cost (electricity + water + detergent). Returns: {per_load_eur, weekly_eur, annual_eur}. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)

- loads_per_week (required): Loads per week
- water_liters_per_load: Liters per load (default 50)
- detergent_cost_per_load: Detergent EUR/load (default 0.30)
- electricity_kwh_per_load: kWh per load (default 1.2)

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
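From the documented defaults, the per-load arithmetic can be sketched as below. Note that the electricity and water unit prices are not parameters; the two constants here (hypothetical names and values) stand in for the server's undocumented internal rates.

```python
# Assumed unit prices: the server's actual hidden constants are undocumented.
ELECTRICITY_EUR_PER_KWH = 0.20
WATER_EUR_PER_LITER = 0.004

def laundry_cost(loads_per_week, water_liters_per_load=50,
                 detergent_cost_per_load=0.30, electricity_kwh_per_load=1.2):
    """Mirrors the documented return shape: {per_load_eur, weekly_eur, annual_eur}."""
    per_load = (electricity_kwh_per_load * ELECTRICITY_EUR_PER_KWH
                + water_liters_per_load * WATER_EUR_PER_LITER
                + detergent_cost_per_load)
    weekly = per_load * loads_per_week
    return {"per_load_eur": round(per_load, 2),
            "weekly_eur": round(weekly, 2),
            "annual_eur": round(weekly * 52, 2)}
```

With the assumed rates, four loads a week at the schema defaults costs 0.74 EUR per load; the real server may use different constants.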
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility. It does not disclose how costs are computed (e.g., assumed electricity rate, water rate). The default values in the schema are mentioned but not explained in context, leaving ambiguity about hidden constants.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the core purpose without unnecessary words. Could include more detail without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of four parameters and no output schema, the description lacks crucial information about the cost calculation method and underlying assumptions (e.g., electricity and water rates). This incompleteness could lead to user confusion.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds minimal extra meaning beyond what the schema provides, merely grouping components but not explaining the calculation logic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates weekly and annual laundry cost covering electricity, water, and detergent. It has a specific verb and resource. However, it does not distinguish itself from similar sibling tools like calculate_electricity_cost or calculate_water_bill.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There are no exclusions or mentions of related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_lawn_mowing_frequency (grade B)

Calculate recommended lawn mowing interval based on grass type, season and rainfall. See list_bundles for related 'jardinage' calculators.

Parameters (JSON Schema)

- season (required): Current season
- grass_type (required): Type of grass: cool_season (fescue/rye), warm_season (bermuda/zoysia), or mixed
- weekly_rainfall_mm: Average weekly rainfall in mm (default 25mm)

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
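The server's actual rule is not published, but a heuristic of the kind the description implies might look like the following. Every table value and adjustment factor here is invented for illustration and is not the tool's real logic.

```python
# Purely hypothetical base intervals (days) and seasonal growth factors.
BASE_DAYS = {"cool_season": 7, "warm_season": 5, "mixed": 6}
SEASON_FACTOR = {"spring": 0.8, "summer": 1.0, "autumn": 1.2, "winter": 2.0}

def mowing_interval_days(season, grass_type, weekly_rainfall_mm=25):
    """Illustrative interval heuristic: grass type sets a base, season scales it,
    and rainfall nudges it (more rain means faster growth, shorter interval)."""
    days = BASE_DAYS[grass_type] * SEASON_FACTOR[season]
    if weekly_rainfall_mm > 40:
        days *= 0.8   # heavy rain: mow more often
    elif weekly_rainfall_mm < 10:
        days *= 1.5   # drought: mow less often
    return round(days)
```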
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description does not disclose behavioral traits such as return format, assumptions, or limitations. For a simple calculator, minimal disclosure but still lacking.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with verb and resource, no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple parameterized calculation, description covers purpose and inputs. No output schema exists, but the output type (interval) is implicit. Minor gap: no explicit output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all 3 parameters with descriptions. Description adds no extra semantics beyond what schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Calculate'), specific resource ('lawn mowing interval'), and key inputs (grass type, season, rainfall). Distinct from many sibling 'calculate_*' tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs. other calculation tools. Does not mention prerequisites or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_lawn_seed (grade D)

Compute grass seed quantity (kg) for a lawn area at recommended seeding rate. Use for landscaping. Inputs: area m², seed rate g/m². Returns seed kg. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)

- area_m2 (required): Lawn area m²
- rate_g_m2: Seeding rate g/m²

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
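The calculation the description states (area times rate, returned in kg) reduces to one line. The 35 g/m² default is an assumed typical value, since the schema lists no default for rate_g_m2.

```python
def lawn_seed_kg(area_m2, rate_g_m2=35):
    """Seed mass in kg: area (m²) times seeding rate (g/m²), converted to kg."""
    return round(area_m2 * rate_g_m2 / 1000, 2)
```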
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist. The description provides no behavioral details beyond the minimal name. It doesn't disclose that it is a read-only, idempotent calculation, or what the output represents.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at 2 words, but under-specified. It does not earn its place as it provides negligible information. Could be more informative without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite low complexity (2 params, no output schema), the description fails to state the tool's purpose or what it returns. A complete description would say 'Calculates lawn seed quantity required for a given area and seeding rate.'

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with descriptions for both parameters (area_m2 and rate_g_m2). The description adds no additional meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description is 'Lawn seed quantity' which is a vague restatement of the tool name. It lacks a verb and specific resource. Among many calculation siblings, it doesn't clarify that it calculates the total seed needed based on area and rate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool or when to use alternatives like 'calculate_seed_quantity' or other gardening calculators. No context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_leave_days (grade B)

Calculate French paid leave (congés payés): 2.5 days/month, max 25 working days/year. Returns: {accrual_per_month, accrued_days, capped_at_max, max_annual_days, days_to_max, months_to_max}. See list_bundles for related 'temps-rh' calculators.

Parameters (JSON Schema)

- start_date (required): YYYY-MM-DD (employment start date)
- months_worked (required): Months worked in the reference period (1-12)

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
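The documented rule (2.5 days per month, capped at 25) and the documented return fields can be reproduced as a sketch. The required start_date input is omitted here because the accrual arithmetic as described depends only on months_worked.

```python
ACCRUAL_PER_MONTH = 2.5   # French congés payés accrual, per the description
MAX_ANNUAL_DAYS = 25      # statutory cap, per the description

def leave_days(months_worked):
    """Return the fields named in the tool description."""
    raw = ACCRUAL_PER_MONTH * months_worked
    accrued = min(raw, MAX_ANNUAL_DAYS)
    days_to_max = max(0.0, MAX_ANNUAL_DAYS - accrued)
    return {"accrual_per_month": ACCRUAL_PER_MONTH,
            "accrued_days": accrued,
            "capped_at_max": raw > MAX_ANNUAL_DAYS,
            "max_annual_days": MAX_ANNUAL_DAYS,
            "days_to_max": days_to_max,
            "months_to_max": days_to_max / ACCRUAL_PER_MONTH}
```

Six months accrue 15 days; a full 12 months hits the 25-day cap (12 × 2.5 = 30, capped).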
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description is the sole source for behavior. It gives the formula but does not disclose that it requires start_date and months_worked as inputs, nor does it mention return type, error handling, or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that efficiently conveys the core purpose, the calculation rule, and the cap. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator tool with no output schema and no annotations, the description covers the essential rule and limit. It is adequate for an AI agent to understand what the tool computes, though missing details on return format and edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters documented in schema). The description adds the formula context but does not provide additional semantic meaning for the parameters beyond what the schema already offers (e.g., start_date format, months_worked range).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates French paid leave (congés payés) with the specific accrual rate (2.5 days/month) and cap (max 25 working days/year). It is distinct among siblings due to the French-specific context and formula.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other leave-related tools (e.g., calculate_vacation_days_fr, calculate_maternity_leave_fr). The description does not mention alternatives or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_led_savings (grade C)

Compute energy and money saved by switching from incandescent/halogen to LED. Use for energy audit. Inputs: old wattage, LED wattage, daily hours, kWh price, bulbs. Returns yearly savings. See list_bundles for related 'energie' calculators.

Parameters (JSON Schema)

- led_w (required): LED replacement wattage
- old_w (required): Old bulb wattage
- hours_day: Hours per day
- num_bulbs: Number of bulbs
- price_kwh: EUR/kWh

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
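The yearly-savings formula implied by the inputs is straightforward wattage-difference arithmetic. The default values below are assumptions, since the schema documents none, and the output field names are hypothetical.

```python
def led_savings(old_w, led_w, hours_day=3.0, num_bulbs=1, price_kwh=0.20):
    """Energy and money saved per year by replacing old bulbs with LEDs.
    Defaults (3 h/day, 1 bulb, 0.20 EUR/kWh) are assumed, not documented."""
    kwh_per_year = (old_w - led_w) / 1000 * hours_day * 365 * num_bulbs
    return {"kwh_saved_per_year": round(kwh_per_year, 2),
            "eur_saved_per_year": round(kwh_per_year * price_kwh, 2)}
```

Replacing ten 60 W incandescents with 9 W LEDs at 3 h/day saves about 558 kWh/year.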
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. The single sentence does not disclose any behavioral traits such as side effects, read-only nature, or limitations. For a simple calculation tool, this is minimal but not misleading.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (one short sentence) but lacks structure. It is not verbose, yet it could be more informative without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and the description does not mention return values, units, or calculation assumptions. For a simple tool, this is acceptable but incomplete for full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters already described (wattage, hours, bulbs, price per kWh). The description adds no additional context, which is acceptable per guidelines when schema is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Savings from switching to LED bulbs' clearly indicates the tool's purpose related to calculating savings from LED conversion. The name reinforces this. While it lacks a verb, the intent is unambiguous and distinct from sibling calculator tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other calculation tools. There is no mention of prerequisites, context, or alternatives, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_life_path_numerology (grade B)

Calculate numerology life path number from birth date. Returns: {life_path_number, meaning}. See list_bundles for related 'fun' calculators.

Parameters (JSON Schema)

- birth_date (required): Birth date in YYYY-MM-DD format

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
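The standard life-path reduction, which this tool presumably implements, sums all digits of the date and reduces to a single digit. Master-number (11/22) handling varies by tradition and is omitted here, and the documented 'meaning' field is not reproduced.

```python
def life_path_number(birth_date):
    """Sum the digits of a YYYY-MM-DD date, then reduce the sum to 1-9."""
    total = sum(int(ch) for ch in birth_date if ch.isdigit())
    while total > 9:
        total = sum(int(ch) for ch in str(total))
    return total
```

For 1990-12-25 the digits sum to 29, reducing 29 → 11 → 2.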
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only says 'calculate', implying a read-only computation, but does not disclose behavioral traits such as input validation rules, error handling, or limits on date ranges, and so fails to provide transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, compact sentence with no filler. Every word is meaningful and the structure is front-loaded with the verb and object.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description does not explain what the tool returns (e.g., an integer, a string, a breakdown). It also provides no context about potential errors or limitations. For a simple tool, this is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter 'birth_date' already has a clear description (YYYY-MM-DD format). The tool description adds no new semantics beyond what the schema provides, so it meets the baseline but does not exceed it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the specific resource 'numerology life path number', distinguishing it from numerous sibling 'calculate_' tools covering different topics. No ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, when not to use it, or any prerequisites. It merely restates the obvious function, lacking explicit usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_light_year (grade C)

Convert between light-years and km/miles. Use for astronomy. 1 ly = 9.461×10¹² km. Inputs: value, from, to. Returns converted distance. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)

- unit (required): Input unit
- value (required): Value

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
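Using the factor stated in the description (1 ly = 9.461×10¹² km), the conversion can be sketched as below. The enum values for unit ('ly', 'km', 'mi') are assumptions, since the schema does not list them here.

```python
LY_KM = 9.461e12          # km per light-year, per the tool description
KM_PER_MILE = 1.609344    # exact international mile

def light_year_convert(value, unit):
    """Convert the input value to all three units, pivoting through km."""
    to_km = {"ly": LY_KM, "km": 1.0, "mi": KM_PER_MILE}
    km = value * to_km[unit]
    return {"ly": km / LY_KM, "km": km, "mi": km / KM_PER_MILE}
```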
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must disclose behavioral traits. It only states 'Light year conversions' without any information about side effects, safety, or limits. The agent cannot infer whether this tool is read-only or has any constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (two words) but sacrifices clarity and completeness. While efficient, it does not earn its place as a useful standalone description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description is too sparse. It does not explain the conversion output or any edge cases. For a simple conversion tool, more context is needed to ensure correct agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both parameters described ('Value' and 'Input unit') and an enum for unit. The description adds no additional semantic information beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Light year conversions' is a noun phrase, lacking a verb. It vaguely indicates the tool deals with light year conversions but does not specify that it converts between multiple astronomical units (ly, parsec, au, km). The schema clarifies the units, but the description alone is insufficient for an agent to understand the exact purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'convert_distance' or 'calculate_light_year_distance'. The description does not mention context, prerequisites, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_light_year_distance (grade C)

Convert astronomical distances between light-years, parsecs, AU, km. Returns: {light_years, parsecs, au, km, original}. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)

- value (required): Distance value
- from_unit (required): Source unit

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
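A pivot-through-km sketch matching the documented return fields. The parsec and AU factors are standard astronomical constants, and the light-year factor matches the sibling calculate_light_year description; the unit key names are taken from the documented output shape.

```python
# km per unit, used as the common pivot.
KM_PER_UNIT = {"light_years": 9.461e12,
               "parsecs": 3.0857e13,
               "au": 1.496e8,
               "km": 1.0}

def astro_distance(value, from_unit):
    """Return the distance expressed in every supported unit, plus the input."""
    km = value * KM_PER_UNIT[from_unit]
    out = {name: km / factor for name, factor in KM_PER_UNIT.items()}
    out["original"] = value
    return out
```

One parsec comes out at about 3.26 light-years, the standard relation.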
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavioral aspects. It only says 'convert' with no mention of precision, error handling, authentication, or whether it converts to all units or one. Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with verb and specific resource, no extra words. Highly concise and structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool but missing critical behavior: no output schema or description of what the tool returns (presumably multiple conversions?). Does not clarify if there are default output units or if all converted values are returned. Incomplete for agent execution.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with 'Distance value' and 'Source unit' descriptions. The tool description adds context about the units but is redundant with the enum values. Baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it converts astronomical distances among specified units (light-years, parsecs, AU, km). However, it does not differentiate from sibling tools like convert_distance or calculate_light_year, so it's clear but not distinctive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as convert_distance or other calculate_* tools. The description simply states the function without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_linear_regression (grade C)

Calculate linear regression slope and intercept from summary statistics. Returns: {error}. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)

- n (required): Number of data points
- sum_x2 (required): Sum of (xi-x_mean)²
- sum_xy (required): Sum of (xi-x_mean)*(yi-y_mean)
- x_mean (required): Mean of x values
- y_mean (required): Mean of y values

Output Schema

- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
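From the centered sums defined in the schema, slope and intercept follow directly: slope = Σ(xi-x̄)(yi-ȳ) / Σ(xi-x̄)², intercept = ȳ - slope·x̄. The error-return path below is a guess at the documented {error} case.

```python
def linear_regression(n, x_mean, y_mean, sum_x2, sum_xy):
    """Ordinary least squares from centered summary statistics, as defined
    in the input schema (sum_x2 and sum_xy are already mean-centered)."""
    if n < 2 or sum_x2 == 0:
        return {"error": "need at least two points with varying x"}
    slope = sum_xy / sum_x2
    return {"slope": slope, "intercept": y_mean - slope * x_mean}
```

For the points (1,2), (2,4), (3,6): x̄=2, ȳ=4, Σ(xi-x̄)²=2, Σ(xi-x̄)(yi-ȳ)=4, giving slope 2 and intercept 0.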
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fails to disclose important behavioral traits such as that the tool is read-only, requires n >= 2, or that it returns only slope and intercept. The description provides minimal safety information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, front-loaded with the action. It is efficient but could benefit from slight expansion without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 5 parameters with full schema coverage and no output schema, the description is adequate for a simple computation but does not cover output format or edge cases like insufficient data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema descriptions adequately define each parameter. The description does not add additional meaning beyond stating 'from summary statistics', so baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates linear regression slope and intercept from summary statistics. However, it does not distinguish itself from sibling tools like calculate_statistics or calculate_z_score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It does not mention that it requires precomputed summary statistics instead of raw data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
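The tool under review computes a linear regression from summary statistics. As a sketch of the math involved (the parameter names below are illustrative, not the server's actual schema), slope and intercept follow from the classic least-squares normal equations:

```python
# Least-squares slope/intercept from summary statistics.
# Assumed inputs: n (count), sum_x, sum_y, sum_xy, sum_x2 -- these names
# are hypothetical; the server's real parameter names may differ.

def linreg_from_summary(n, sum_x, sum_y, sum_xy, sum_x2):
    if n < 2:
        raise ValueError("at least two data points are required")
    denom = n * sum_x2 - sum_x ** 2
    if denom == 0:
        raise ValueError("all x values are identical; slope is undefined")
    slope = (n * sum_xy - sum_x * sum_y) / denom
    intercept = (sum_y - slope * sum_x) / n
    return slope, intercept

# Points (1, 2), (2, 4), (3, 6): n=3, sum_x=6, sum_y=12, sum_xy=28, sum_x2=14
print(linreg_from_summary(3, 6, 12, 28, 14))  # → (2.0, 0.0)
```

Note that the n >= 2 guard and the degenerate-denominator case are exactly the edge conditions the Behavior review says the description omits.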

calculate_lmnp_amortization (A)

Calculate LMNP rental property amortization and annual tax deduction (French tax regime). See list_bundles for related 'immobilier' calculators.

Parameters
Name | Required | Description
annual_rent | Yes | Annual gross rental income in EUR
property_value | Yes | Property purchase price excluding land in EUR
furniture_value | Yes | Furniture and equipment value in EUR

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the calculation purpose but does not disclose any behavioral traits such as being read-only, expected output format, or any side effects. For a calculation tool, this is minimally acceptable but lacking detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that conveys the essential purpose without unnecessary words. It is front-loaded with the key action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (3 parameters, no output schema, no annotations), the description is adequately complete for its purpose. However, adding a brief note on the return value or typical use cases would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with each parameter having a clear description and minimum constraint. The tool description adds no additional meaning beyond what the schema already provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly specifies the verb 'Calculate' and the resource 'LMNP rental property amortization and annual tax deduction' with the context of French tax regime. It distinguishes itself from siblings like 'calculate_lmnp_deficit' by focusing on amortization and deduction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for French LMNP amortization calculations but does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention when not to use it. No exclusions or alternative tools are referenced.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
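For orientation, a straight-line amortization along the lines this tool describes might look as follows. The component periods (30 years for the building, 7 for furniture) and the no-deficit cap are assumptions chosen for illustration; the server's actual periods and rules are not documented on this page.

```python
# Illustrative straight-line LMNP amortization sketch. The 30-year
# building and 7-year furniture periods are assumed, not the server's.

def lmnp_amortization(annual_rent, property_value, furniture_value,
                      building_years=30, furniture_years=7):
    building = property_value / building_years
    furniture = furniture_value / furniture_years
    annual_deduction = building + furniture
    # Under the French "réel" regime, amortization cannot create a
    # deficit, so the deductible part is capped at rental income.
    deductible = min(annual_deduction, annual_rent)
    return {"building_amortization": round(building, 2),
            "furniture_amortization": round(furniture, 2),
            "annual_deduction": round(deductible, 2)}

print(lmnp_amortization(annual_rent=12000, property_value=150000,
                        furniture_value=7000))
```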

calculate_lmnp_deficit (C)

Calculate LMNP (non-professional furnished rental) tax deficit. Returns: {total_deductible, deficit, note}. See list_bundles for related 'immobilier' calculators.

Parameters
Name | Required | Description
annual_rent | Yes | Annual rental income in EUR
annual_charges | Yes | Annual deductible charges in EUR
depreciation_annual | Yes | Annual depreciation amount in EUR

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits beyond the action. It does not mention what the output is, how the calculation works, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words; while borderline sparse, it remains appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a tax deficit calculation and the lack of an output schema, the description is incomplete. It does not explain the context (French-specific tax) or what the result represents.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add additional meaning beyond what is already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate' and the resource 'LMNP tax deficit'. However, it does not differentiate from sibling tool calculate_lmnp_amortization, which is closely related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, such as calculate_lmnp_amortization. No mention of prerequisites or typical scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
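The arithmetic behind a deficit figure like this is simple enough to sketch. The field names mirror the documented return shape {total_deductible, deficit, note}; the note wording and the sign convention (deficit reported as a positive EUR amount) are assumptions:

```python
# Sketch of an LMNP tax-deficit computation matching the documented
# return shape {total_deductible, deficit, note}.

def lmnp_deficit(annual_rent, annual_charges, depreciation_annual):
    total_deductible = annual_charges + depreciation_annual
    net = annual_rent - total_deductible
    deficit = -net if net < 0 else 0
    note = ("deficit carries forward against future furnished-rental income"
            if deficit else "no deficit: rental income covers deductions")
    return {"total_deductible": total_deductible,
            "deficit": deficit,
            "note": note}

print(lmnp_deficit(annual_rent=10000, annual_charges=4000,
                   depreciation_annual=8000))
```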

calculate_loan_early_repayment (C)

Calculate interest savings from early partial loan repayment. Returns: {months_saved, new_months_remaining, interest_savings_eur, early_repayment_eur}. See list_bundles for related 'finance-universal' calculators.

Parameters
Name | Required | Description
early_amount | Yes | Early repayment amount EUR
monthly_payment | Yes | Current monthly payment EUR
months_remaining | Yes | Months remaining
remaining_capital | Yes | Remaining loan capital EUR

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries full burden. It only states the calculation without disclosing assumptions, formula, limitations, or any side effects. For a calculation tool, this is minimally transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise with no wasted words, but it lacks structure and critical information could be front-loaded. It is adequately sized for a simple tool but could be improved.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description should hint at the return value or method, but it only says 'interest savings'. It does not explain the calculation logic or what output format to expect, leaving context incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds no new meaning beyond what is already in the input schema. It does not provide additional context or examples for the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates interest savings from early partial loan repayment, using a specific verb and resource. It is distinct from sibling tools like calculate_loan_payment, but does not explicitly differentiate itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as other loan calculators. There is no mention of prerequisites, assumptions, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
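One plausible way such a tool could work, given only the four documented inputs, is to infer the loan's periodic rate from the current schedule and then re-solve the remaining term after the lump sum. This is a guess at the method, not the server's documented algorithm:

```python
import math

# Infer the monthly rate r from P = M * (1 - (1+r)^-n) / r by bisection,
# then recompute the term at the same payment after the lump sum.
def implied_monthly_rate(capital, payment, months):
    lo, hi = 1e-9, 0.05  # search window: roughly 0% to 60% per year
    for _ in range(200):
        mid = (lo + hi) / 2
        present_value = payment * (1 - (1 + mid) ** -months) / mid
        if present_value > capital:
            lo = mid  # rate too low: discounted payments exceed capital
        else:
            hi = mid
    return (lo + hi) / 2

def early_repayment(early_amount, monthly_payment, months_remaining,
                    remaining_capital):
    r = implied_monthly_rate(remaining_capital, monthly_payment,
                             months_remaining)
    new_capital = remaining_capital - early_amount
    # Months needed to amortize new_capital at the same payment and rate
    new_months = math.ceil(
        -math.log(1 - r * new_capital / monthly_payment) / math.log(1 + r))
    savings = (monthly_payment * months_remaining
               - monthly_payment * new_months - early_amount)
    return {"months_saved": months_remaining - new_months,
            "new_months_remaining": new_months,
            "interest_savings_eur": round(savings, 2),
            "early_repayment_eur": early_amount}
```

Under this model, a 10 000 EUR lump sum on a 100 000 EUR balance paid at 1 000 EUR/month with 120 months left shortens the loan by roughly 14 months.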

calculate_loan_payment (A)

Calculate monthly loan payment for any generic loan. Returns: {principal, monthly_payment, total_cost, total_interest}. See list_bundles for related 'finance-universal' calculators.

Parameters
Name | Required | Description
months | Yes | Loan duration in months
principal | Yes | Loan amount
annual_rate | Yes | Annual interest rate in %

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but the description implies a read-only calculation. It does not disclose formula details or assumptions, but for a simple financial calculation, the minimal information is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words. Concise and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator tool with 3 parameters and no output schema, the description is sufficient. Could mention standard formula used, but not necessary for clarity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions. The description adds no extra meaning beyond what the schema already provides, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Calculate monthly loan payment for any generic loan', using a specific verb and resource. It distinguishes itself from many sibling 'calculate_*' tools by specifying loan payment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_mortgage' or 'calculate_loan_early_repayment'. The description lacks explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
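A generic tool like this almost certainly rests on the standard annuity formula. A minimal sketch matching the documented return fields (the server's exact rounding is an assumption):

```python
# Standard amortizing-loan monthly payment: M = P*r / (1 - (1+r)^-n),
# with r the monthly rate. A zero-rate loan degenerates to P/n.

def loan_payment(principal, annual_rate, months):
    r = annual_rate / 100 / 12
    if r == 0:
        monthly = principal / months
    else:
        monthly = principal * r / (1 - (1 + r) ** -months)
    total = monthly * months
    return {"principal": principal,
            "monthly_payment": round(monthly, 2),
            "total_cost": round(total, 2),
            "total_interest": round(total - principal, 2)}

# 200 000 over 20 years at 3% nominal
print(loan_payment(200000, 3.0, 240))
```

which yields a monthly payment of about 1 109 EUR.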

calculate_loan_to_value (C)

Compute Loan-to-Value (LTV) ratio for mortgage risk. Use for mortgage application or PMI thresholds. Inputs: loan amount, property value. Returns LTV %, risk level, PMI required y/n. See list_bundles for related 'immobilier' calculators.

Parameters
Name | Required | Description
loan_amount | Yes | Loan amount EUR
property_value | Yes | Property value EUR

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not explain how the LTV ratio is computed or what risk levels are used. With no annotations and no output schema, the agent has no information about output format, risk classification thresholds, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short phrase, but it lacks structure and completeness. It is concise but should provide more context to be helpful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (2 parameters, no output schema, no annotations), the description should at least hint at the formula or risk categorization. The current description is too minimal for an agent to understand the tool's full behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters are fully described in the input schema (loan amount and property value in EUR). The description adds no additional meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool calculates LTV ratio and risk level, with input parameters indicating it's for real estate. However, it does not differentiate from similar siblings like 'calculate_cac_ltv_ratio' which could cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidelines on when to use this tool versus alternatives. It does not mention typical use cases or scenarios, nor does it reference related tools such as 'calculate_debt_to_income' or 'calculate_debt_service_ratio'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
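The ratio itself is elementary; what the review flags as missing is the classification logic. A sketch with conventional thresholds (the 80% PMI cutoff is the usual US mortgage convention; the risk bands are illustrative assumptions that may not match the server's):

```python
# LTV = loan / value * 100. The 80% PMI threshold and the risk bands
# below are illustrative assumptions, not the server's documented rules.

def loan_to_value(loan_amount, property_value):
    ltv = loan_amount / property_value * 100
    if ltv <= 80:
        risk = "low"
    elif ltv <= 95:
        risk = "moderate"
    else:
        risk = "high"
    return {"ltv_pct": round(ltv, 1),
            "risk_level": risk,
            "pmi_required": ltv > 80}

print(loan_to_value(loan_amount=160000, property_value=200000))
```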

calculate_logarithm (A)

Calculate logarithm in any base (natural, common, binary). Returns: {result}. See list_bundles for related 'math' calculators.

Parameters
Name | Required | Description | Default
base | No | Log base: e=natural, 10=common, 2=binary | e
value | Yes | Value to take log of

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It only states the basic operation without explaining error handling, domain constraints (e.g., value must be positive), or output behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words, front-loading the core purpose. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple math tool, the description combined with the schema provides sufficient context for an agent to use it properly. However, it lacks explicit mention of output type, but that is often inferred.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema; the bases and value are already described in the schema. It does not elaborate on parameter semantics further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates logarithms in any base (natural, common, binary), which is a specific verb-resource combination. It distinguishes itself from numerous sibling calculation tools by focusing specifically on logarithm calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for logarithm calculations but provides no explicit guidance on when to use this tool versus alternatives (e.g., other calculation tools). No exclusions or when-not conditions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
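The domain constraint the Behavior review mentions (value must be positive) is worth making explicit. A sketch that accepts the schema's "e", 10, or 2 base values plus any other positive base:

```python
import math

def logarithm(value, base="e"):
    if value <= 0:
        raise ValueError("logarithm is only defined for positive values")
    if base == "e":
        return math.log(value)       # natural log
    b = float(base)
    if b == 10.0:
        return math.log10(value)     # common log, more accurate than ln/ln
    if b == 2.0:
        return math.log2(value)      # binary log
    return math.log(value, b)        # arbitrary base via change of base

print(logarithm(8, base=2))  # → 3.0
```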

calculate_lottery_odds (B)

Compute the odds of winning a lottery for various prize tiers. Use for awareness, education. Inputs: numbers to pick, total numbers, bonus number config. Returns probability and 1-in-N for each tier. See list_bundles for related 'jeux-probabilites' calculators.

Parameters
Name | Required | Description
bonus_pool | No | Size of the bonus number pool (default 0, no bonus)
bonus_numbers | No | Number of bonus/powerball numbers to match (default 0)
total_numbers | Yes | Total numbers in the main pool
numbers_to_pick | Yes | How many numbers you pick

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It merely states the function without disclosing behavioral traits like handling bonus numbers or output format (odds vs probability). Minimal transparency beyond schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 8 words, no redundancy. Efficiently communicates purpose without extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple calculation tool, but missing output description (no output schema). Agent might not know if result is decimal, fraction, or percentage. Not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with descriptions. Description adds no additional meaning beyond 'any number pool and pick count', which is generic. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it calculates lottery win odds, specifying the resource (lottery) and action (calculate odds). It is distinct from sibling tools, since no other lottery-specific tool exists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives, such as other probability tools (e.g., calculate_dice_probability). Description only implies general use but lacks when-not or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
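Jackpot odds reduce to binomial coefficients. A sketch for the top tier only (the per-tier odds the description mentions would need the hypergeometric distribution, omitted here):

```python
from math import comb

# Odds of matching every number: C(total, pick), multiplied by
# C(bonus_pool, bonus_numbers) when a separate bonus pool exists.
def jackpot_odds(numbers_to_pick, total_numbers,
                 bonus_numbers=0, bonus_pool=0):
    odds = comb(total_numbers, numbers_to_pick)
    if bonus_pool:
        odds *= comb(bonus_pool, bonus_numbers)
    return odds

print(jackpot_odds(6, 49))         # classic 6/49 lotto → 13983816
print(jackpot_odds(5, 50, 2, 12))  # EuroMillions-style → 139838160
```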

calculate_luggage_weight (A)

Calculate total luggage weight and compare to airline limits (carry-on, economy checked, business checked). Returns: {total_kg, status}. See list_bundles for related 'voyage' calculators.

Parameters
Name | Required | Description
items | Yes | Array of luggage items with name and weight in kg

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It only describes the calculation and comparison but does not disclose side effects, required permissions, rate limits, or output format. Minimal behavioral info is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence (13 words) that is front-loaded and free of extraneous information. Every word contributes to the purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one array parameter) and no output schema, the description could be more complete by explaining the output format (e.g., what the comparison returns). It is minimally adequate but leaves ambiguity about return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter 'items' with a description, and schema coverage is 100%. The description adds meaning by specifying that the tool compares to airline limits (carry-on, economy, business), providing context beyond the schema. This enhances understanding of the parameter's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate total luggage weight and compare to airline limits', specifying the resource (luggage weight) and scope (airline limits for carry-on, economy, and business). It distinguishes well from sibling tools, all of which are other 'calculate_*' tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention when not to use it. While the purpose is clear, no contextual cues or exclusion criteria are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
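Summing and comparing is trivial; the interesting part is the limit table. The limits below (10/23/32 kg) are common airline figures used for illustration, and the "weight_kg" item key is an assumed shape, not the server's documented schema:

```python
# Assumed per-class weight limits in kg; real airline limits vary.
LIMITS_KG = {"carry_on": 10, "economy_checked": 23, "business_checked": 32}

def luggage_weight(items):
    total = round(sum(item["weight_kg"] for item in items), 2)
    status = {name: ("ok" if total <= limit else "over")
              for name, limit in LIMITS_KG.items()}
    return {"total_kg": total, "status": status}

print(luggage_weight([{"name": "backpack", "weight_kg": 8},
                      {"name": "suitcase", "weight_kg": 14}]))
```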

calculate_malus_ecologique (B)

French ecological malus 2026: CO2 g/km based tax on new vehicle registration. Returns: {malus_eur, threshold, max}. See list_bundles for related 'auto-transport' calculators.

Parameters
Name | Required | Description
co2_g_km | Yes | CO2 emissions in g/km

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It states it's a tax calculation but does not disclose whether it is read-only, the nature of its output, or any side effects. As a calculation tool, it's likely deterministic and safe, but this is not explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence (16 words) that front-loads the topic. It is efficient, though it could include a bit more detail without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (one numeric parameter, no output schema, no annotations), the description provides the core purpose but omits the output format (likely a tax amount). It is adequate but leaves marginal ambiguity about the result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'co2_g_km', which already describes its meaning. The tool description adds no additional value beyond what is in the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool calculates the French ecological malus for 2026 based on CO2 emissions (g/km) for new vehicle registration. It uses specific verbs and resources, and clearly distinguishes itself from numerous sibling calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for calculating the malus but does not specify when to use it versus other French tax calculation tools (e.g., calculate_french_income_tax). No alternatives or when-not conditions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_marathon_splits (Grade B)

Generate target split times for a marathon, half-marathon, or other race. Use for race-day pacing. Inputs: target finish time, distance km. Returns 5K splits, halfway, and final pace. See list_bundles for related 'sport' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
target_time_minutes | Yes | Target marathon finish time in minutes | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
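The split arithmetic this description implies is simple even pacing. The sketch below assumes the marathon default of 42.195 km (the schema only exposes target_time_minutes); the function name is illustrative, not the server's implementation.

```python
def marathon_splits(target_time_minutes: float,
                    distance_km: float = 42.195) -> dict:
    """Even-pace 5 km splits, halfway time, and final pace (min/km)."""
    pace = target_time_minutes / distance_km
    splits = {f"{k} km": round(k * pace, 1)
              for k in range(5, int(distance_km) + 1, 5)}
    return {"pace_min_per_km": round(pace, 2),
            "halfway_min": round(pace * distance_km / 2, 1),
            "splits_min": splits}
```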
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the purpose but fails to mention output format, rounding behavior, pace units (km vs miles), or constraints (e.g., valid target time range). Even a pure calculator tool should disclose these behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and free of redundancy. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter, the description is adequate but lacks output details. Without an output schema, the description should at least hint at the return structure (e.g., list of splits). It is minimally complete but leaves room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter description is clear. The overall description adds context about the output (pace plans) but does not enhance the meaning of the parameter beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (generate), the resource (pacing plans), and the context (marathon target time). It distinguishes itself by mentioning 'even and negative-split' plans, which adds specificity beyond a generic pacing tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Given the large number of sibling calculate_ tools, this is a significant gap. There is no mention of prerequisites, when not to use, or alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_markup_margin (Grade C)

Convert between markup and margin (often confused). Use for pricing decisions or COGS reporting. Inputs: cost and either markup % or margin %. Returns selling price and the other metric. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
cost | Yes | Cost price | -
selling_price | Yes | Selling price | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
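The distinction the description highlights fits in a few lines: markup divides profit by cost, margin divides it by selling price (so margin = markup / (1 + markup)). A minimal sketch with a hypothetical function name, not necessarily the server's implementation:

```python
def markup_margin(cost: float, selling_price: float) -> dict:
    """Markup is profit over cost; margin is profit over selling price."""
    profit = selling_price - cost
    return {"markup_pct": round(100 * profit / cost, 2),
            "margin_pct": round(100 * profit / selling_price, 2)}
```

For example, a product costing 80 and sold at 100 carries a 25% markup but only a 20% margin, which is exactly the confusion the tool is meant to resolve.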
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description does not disclose behavioral traits such as whether it returns both markup and margin, the formula used, or any side effects. This is insufficient for a calculator tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded. However, it could benefit from one more sentence to improve completeness without losing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should clarify the tool's return format (e.g., which values are calculated). The absence of this information, combined with minimal context, leaves the tool incomplete for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents both parameters. The description adds no additional meaning beyond what is in the schema, so baseline score 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Markup vs margin calculator' clearly states the tool calculates markup and margin, a specific verb-resource pair. It likely distinguishes it from related siblings like 'calculate_profit_margin', but does not explicitly differentiate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'calculate_cost_price' or 'calculate_profit_margin'. The description lacks any context for appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_maternity_leave_fr (Grade C)

Compute French maternity leave duration and IJSS allowance. Use for HR or expectant parents. Inputs: due date, prior children count, multiple birth. Returns pre/post-birth leave days and allowance estimate. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
twins | No | Multiple birth | -
existing_children | Yes | Existing children | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
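For context, the statutory durations commonly cited for the Code du travail are 16 weeks (6 prenatal + 10 postnatal) for a first or second child, 26 weeks (8 + 18) from the third child, and 34 weeks (12 + 22) for twins. A sketch of that duration logic only (the IJSS allowance part is omitted), assuming those figures and matching this tool's two parameters:

```python
def maternity_leave_fr(existing_children: int, twins: bool = False) -> dict:
    """French statutory maternity leave in weeks (prenatal, postnatal).
    Figures per the commonly cited Code du travail rules; verify before use."""
    if twins:
        pre, post = 12, 22      # multiple birth (twins)
    elif existing_children >= 2:
        pre, post = 8, 18       # third child or later
    else:
        pre, post = 6, 10       # first or second child
    return {"prenatal_weeks": pre, "postnatal_weeks": post,
            "total_weeks": pre + post}
```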
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states the purpose without mentioning whether the calculation is based on current French law, if it's deterministic, or what assumptions are made. This lack of detail could lead to misuse.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short and to the point, but it lacks structure and could be more informative without becoming verbose. It is not wasteful, but it is under-specified for optimal agent understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema), the description should at least indicate the output format or the legal context. It fails to mention that the calculation follows French law or what the result represents (e.g., number of weeks). The description is incomplete for reliable agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its parameters (twins and existing_children). The description adds no additional meaning beyond what the schema already provides, so the baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'French maternity leave duration,' which is clear in indicating the tool calculates the length of maternity leave in France. However, it does not specify whether it returns weeks, days, or something else, and it barely adds value beyond the tool's name. Among siblings with similar 'calculate_' prefixes, it lacks differentiation details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool instead of alternatives. There is no indication of prerequisites, context of use (e.g., for French residents), or comparison to other tools that might calculate leave or benefits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_max_heart_rate (Grade B)

Estimate maximum heart rate using standard or age-adjusted formulas. Returns: {max_heart_rate}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
age | Yes | Age in years | -
formula | No | Formula: standard (220-age), tanaka (men), gulati (women) | standard

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
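The three formulas named in the parameter enum are well known: 220 - age (standard), 208 - 0.7 × age (Tanaka 2001), and 206 - 0.88 × age (Gulati 2010, derived for women). A sketch of the selection logic, assuming the server applies them literally:

```python
def max_heart_rate(age: float, formula: str = "standard") -> float:
    """Estimate HRmax using one of the formulas listed in the enum."""
    formulas = {
        "standard": lambda a: 220 - a,         # classic rule of thumb
        "tanaka": lambda a: 208 - 0.7 * a,     # Tanaka et al. 2001
        "gulati": lambda a: 206 - 0.88 * a,    # Gulati et al. 2010 (women)
    }
    return round(formulas[formula](age), 1)
```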
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description uses 'estimate' indicating a read-only calculation, but with no annotations, it doesn't disclose any behavioral traits beyond that. For a simple calculation tool, this is acceptable but could mention that it is non-destructive and does not modify state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, very concise, front-loaded sentence. It efficiently communicates the core purpose, though it could benefit from slight expansion for completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description does not mention the return value (presumably a number). It adequately describes input and purpose, but lacks information about output format or any caveats, making it minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides descriptions for both parameters (age and formula) with 100% coverage. The description adds minimal extra meaning beyond stating that formulas are used, which is already implied by the parameter enums.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates maximum heart rate using formulas, with mention of 'standard or age-adjusted formulas' which aligns with the parameter options. However, it does not explicitly distinguish from the sibling tool 'calculate_heart_rate_zones', which is closely related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'calculate_heart_rate_zones' or other calculation tools. No when-to-use or when-not-to-use information is included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_meat_cooking (Grade D)

Compute meat cooking time and target internal temperature by cut and doneness. Use for kitchen prep. Inputs: meat type, weight kg, doneness. Returns oven time and target temp. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
meat | Yes | Meat type | -
doneness | No | Doneness | medium
weight_kg | Yes | Meat weight kg | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It does not mention what the tool returns (time, temperature, both), any assumptions, or side effects. The description is insufficient for an agent to understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short, but it is under-specified rather than concise. It fails to provide necessary details in a compact form, making it unhelpful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters with enums and no output schema, the description should explain the output and behavior. It does not mention whether the tool returns a cooking time, temperature, or a combination, leaving a significant gap in understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional meaning beyond the schema's basic field descriptions. A baseline of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Meat cooking time and temperature' is vague; it does not specify whether the tool calculates time, temperature, or both. It fails to distinguish itself from sibling tools like 'calculate_meat_cooking_time' or 'calculate_cooking_time'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of context, prerequisites, or exclusions, leaving the agent to infer usage from the vague description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_meat_cooking_time (Grade C)

Compute oven cooking time for meat by cut, weight, and doneness. Use for cooking. Inputs: meat type, weight kg, target doneness. Returns time min and oven temp °C. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
doneness | Yes | Desired doneness | -
meat_type | Yes | Type of meat | -
weight_kg | Yes | Meat weight kg | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
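A table lookup is the likely shape of such a tool. The minutes-per-kg figures and the fixed 180 °C oven temperature below are illustrative placeholders, not the server's actual tables; doneness should always be verified with a meat thermometer.

```python
# Rough oven roasting rates (minutes per kg at ~180 °C).
# ILLUSTRATIVE placeholders only, not the server's actual data.
MIN_PER_KG = {
    ("beef", "rare"): 30, ("beef", "medium"): 40, ("beef", "well_done"): 50,
    ("pork", "well_done"): 60, ("chicken", "well_done"): 45,
}

def meat_cooking_time(meat_type: str, weight_kg: float, doneness: str) -> dict:
    """Return oven time in minutes and an oven temperature for the cut."""
    rate = MIN_PER_KG[(meat_type, doneness)]
    return {"time_min": round(rate * weight_kg), "oven_temp_c": 180}
```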
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits, but it only states the basic function. It does not reveal assumptions, limitations (e.g., standard oven only), or whether the calculation is based on a specific formula.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (one sentence), but it lacks structure and omits necessary details. It is front-loaded but too minimal for adequate understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description is incomplete. It does not specify the output format (e.g., minutes), assumptions (e.g., standard oven temperature), or any constraints (e.g., only certain meats).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all 3 parameters. The description's phrase 'by weight and desired doneness' adds minimal value beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'meat cooking time', and specifies inputs (weight and doneness). It is specific enough to distinguish from generic cooking time tools, though it could explicitly mention the output unit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_cooking_time' or 'calculate_meat_cooking'. No prerequisites or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_menstrual_cycle (Grade B)

Calculate next period, fertile window, and ovulation date. Returns: {next_period, ovulation_date, fertile_window_start, fertile_window_end}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
cycle_length | No | Average cycle length days | -
last_period_date | Yes | Last period start date YYYY-MM-DD | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
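The date arithmetic behind the four returned fields is straightforward under the usual assumptions (a 14-day luteal phase and a fertile window of roughly five days before ovulation through one day after). A sketch, assuming the server follows that convention:

```python
from datetime import date, timedelta

def menstrual_cycle(last_period_date: str, cycle_length: int = 28) -> dict:
    """Predict next period, ovulation, and fertile window from cycle dates."""
    last = date.fromisoformat(last_period_date)
    next_period = last + timedelta(days=cycle_length)
    ovulation = next_period - timedelta(days=14)  # ~14-day luteal phase
    return {
        "next_period": next_period.isoformat(),
        "ovulation_date": ovulation.isoformat(),
        "fertile_window_start": (ovulation - timedelta(days=5)).isoformat(),
        "fertile_window_end": (ovulation + timedelta(days=1)).isoformat(),
    }
```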
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits. It does not state whether the tool has side effects, requires authentication, or is stateless. For a calculator, it is presumably read-only, but that is not confirmed in the description.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, consisting of a single sentence. While it lacks structure like bullet points, the brevity is appropriate for a simple tool. However, it could be slightly longer to include output format without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should provide information about the return format (e.g., dates, JSON structure). It lists three outputs but does not specify how they will be presented. The description also does not clarify the assumptions (e.g., average cycle) beyond what is in the schema. This leaves an agent uncertain about what to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds no additional meaning beyond what the input schema already provides. It does not explain how the parameters relate to the outputs (e.g., that cycle_length modifies the calculation). The description does not elaborate on the output format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates three specific outputs: next period, fertile window, and ovulation date. The verb 'calculate' combined with the resources leaves no ambiguity about the tool's function. It distinguishes well from the many sibling calculate tools that focus on other domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs alternatives. There is no mention of prerequisites, contraindications, or related tools (e.g., due date calculators). The description simply states the tool's purpose without usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_mining_profitability (Grade B)

Compute crypto mining profitability after electricity costs. Use for miners evaluating ROI. Inputs: hashrate, power W, kWh price, network difficulty, coin price. Returns daily/monthly net profit. See list_bundles for related 'crypto' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
power_watts | Yes | Mining hardware power consumption in watts | -
block_reward | No | Block reward in coins (default 3.125 BTC post-halving) | -
hashrate_mhs | Yes | Mining hashrate in MH/s | -
coin_price_usd | Yes | Current coin price in USD | -
network_difficulty | Yes | Current network difficulty | -
electricity_cost_kwh | Yes | Electricity cost per kWh in fiat currency | -

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
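A sketch of the usual expectation formula for Bitcoin-style difficulty, where expected coins per day = hashes/s × 86400 × block_reward / (difficulty × 2^32). Whether the server uses exactly this formula is an assumption; the function name is illustrative.

```python
def mining_profitability(hashrate_mhs: float, power_watts: float,
                         electricity_cost_kwh: float,
                         network_difficulty: float, coin_price_usd: float,
                         block_reward: float = 3.125) -> dict:
    """Expected daily/monthly net profit for Bitcoin-style difficulty."""
    hashes_per_s = hashrate_mhs * 1e6
    coins_per_day = (hashes_per_s * 86_400 * block_reward
                     / (network_difficulty * 2**32))
    revenue = coins_per_day * coin_price_usd
    power_cost = power_watts / 1000 * 24 * electricity_cost_kwh  # kWh * price
    daily = revenue - power_cost
    return {"daily_profit_usd": round(daily, 2),
            "monthly_profit_usd": round(daily * 30, 2)}
```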
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the function without mentioning any assumptions, underlying formulas, or limitations. The agent lacks insight into what the calculation process involves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that is front-loaded with the essential action and resource. No unnecessary words, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 6 parameters and no output schema, the description is too brief. It does not specify what the output looks like, whether it returns a single number or a breakdown, or any assumptions about the inputs (e.g., using BTC as default coin).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters adequately. The description does not add any additional parameter insights beyond what is in the schema, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'cryptocurrency mining profitability', and specifies the output granularity ('daily and monthly'). This distinguishes it from sibling tools like 'calculate_crypto_profit_loss' which may cover trading profits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives such as 'calculate_crypto_profit_loss' or 'calculate_staking_rewards'. The description does not provide any context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_moon_phase (Grade C)

Compute current moon phase and illumination % for any date. Use for astronomy, agriculture, fishing. Inputs: date. Returns phase name, illumination %, age in days. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)
Name | Required | Description
date | Yes | Date in YYYY-MM-DD format

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and description does not disclose behavioral traits such as return format, whether it provides moon phase name or percentage, or any assumptions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, no waste. Appropriate length for a simple tool, though slightly under-informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output format description; with only a generic output schema, the description should explain the return value. A simple tool, but the guidance is incomplete for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter already well-documented in schema. Description adds no additional meaning beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'calculate' and resource 'moon phase' with context 'for a given date'. Specific and unambiguous, though does not distinguish from other calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. No context on prerequisites or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_moroccan_cnss (grade B)

Calculate Moroccan CNSS contributions (employee and employer shares). Returns: {gross_monthly_mad, employee, employer, pension_ceiling_mad}. See list_bundles for related 'finance-afrique-quebec' calculators.

Parameters (JSON Schema)
Name | Required | Description
gross_monthly_mad | Yes | Gross monthly salary in MAD

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as whether the tool is read-only, if it mutates data, or any required permissions. The description is minimal and lacks safety or side-effect details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that is appropriately front-loaded and concise. However, it could include a brief note on output without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has one well-defined parameter but only a generic output schema. The description does not explain the return format (e.g., employee vs employer breakdown, monthly vs annual), leaving agents to infer or test the behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a description for the only parameter 'gross_monthly_mad'. The description adds no extra meaning beyond the schema, so it meets the baseline expectation but does not enhance param understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Moroccan CNSS contributions for both employee and employer shares, which is specific and distinct from sibling tools like calculate_moroccan_income_tax or calculate_belgian_social_contributions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternative calculation tools, nor does it mention prerequisites or context (e.g., CNSS ceiling limits, minimum salary requirements).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_moroccan_income_tax (grade A)

Calculate Moroccan income tax (IR) using DGI progressive brackets with family deductions. Returns: {annual_income_mad, taxable_income, income_tax_mad, effective_rate_pct, marginal_rate_pct, brackets}. See list_bundles for related 'finance-afrique-quebec' calculators.
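To make "DGI progressive brackets with family deductions" concrete, here is a hedged sketch. The bracket boundaries are illustrative values for the Moroccan IR scale and should be verified against current DGI publications; treating the 360 MAD per dependent as a reduction of the tax due, capped at 6 dependents, is an assumption based on the schema note:

```python
# Illustrative brackets (annual taxable income in MAD, marginal rate).
# Verify against the DGI before relying on these figures.
BRACKETS = [(0, 30_000, 0.00), (30_000, 50_000, 0.10),
            (50_000, 60_000, 0.20), (60_000, 80_000, 0.30),
            (80_000, 180_000, 0.34), (180_000, float("inf"), 0.38)]

def moroccan_income_tax(annual_income_mad: float, dependents: int = 0) -> dict:
    tax, marginal = 0.0, 0.0
    for low, high, rate in BRACKETS:
        if annual_income_mad > low:
            tax += (min(annual_income_mad, high) - low) * rate
            marginal = rate
    # Family charge: 360 MAD per dependent, max 6, off the tax due (assumed)
    tax = max(0.0, tax - 360 * min(dependents, 6))
    effective = 100 * tax / annual_income_mad if annual_income_mad else 0.0
    return {"income_tax_mad": round(tax, 2),
            "marginal_rate_pct": round(marginal * 100, 2),
            "effective_rate_pct": round(effective, 2)}
```

For example, an income of 100,000 MAD with these brackets yields 16,800 MAD of tax at a 34% marginal rate.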

Parameters (JSON Schema)
Name | Required | Description
dependents | No | Number of dependents (360 MAD deduction each, max 6)
annual_income_mad | Yes | Annual gross income in Moroccan Dirhams (MAD)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions using 'DGI progressive brackets with family deductions' which gives some insight into the calculation method but does not disclose potential limitations (e.g., handling of zero income, maximum deduction cap, or whether it returns gross or net tax). The description is adequate but not fully transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and key methodology. No redundant or unnecessary words are present, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description does not mention what the tool returns (e.g., numerical tax amount, object with breakdown). For a calculator tool with only a generic output schema, omitting return information leaves some ambiguity. However, given the simplicity of the tool and the presence of sibling calculators, the core function is likely inferred.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides descriptions for both parameters with 100% coverage. The description adds meaning by explaining that the calculation uses 'family deductions', which directly relates to the dependents parameter (360 MAD deduction each). This contextual information enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Moroccan income tax (IR) using DGI progressive brackets with family deductions, providing a specific verb, resource, and methodology. It distinguishes itself from numerous sibling tax calculators by explicitly mentioning 'Moroccan' and the specific tax type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for Moroccan income tax calculations but lacks explicit guidance on when to use this tool versus other tax calculators or tax-related tools. No prerequisites, conditions, or exclusions are mentioned, leaving the agent to infer the appropriate context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_moroccan_profit_foncier (grade C)

Calculate Moroccan property income tax (profit foncier / revenus fonciers). Returns: {annual_rent_mad, taxable_income, income_tax_mad, effective_rate_pct, marginal_rate_pct}. See list_bundles for related 'finance-afrique-quebec' calculators.
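The deductible-expenses step implied by the schema (default 40% of rent) can be sketched as follows. This only shows the taxable-base computation; the resulting base would then be taxed on the progressive IR scale, which is not reproduced here:

```python
def profit_foncier_taxable(annual_rent_mad: float,
                           expenses_pct: float = 40.0) -> float:
    """Taxable base after the standard expense deduction (default 40%)."""
    return round(annual_rent_mad * (1 - expenses_pct / 100), 2)
```

So 120,000 MAD of annual rent leaves a 72,000 MAD taxable base under the default deduction.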

Parameters (JSON Schema)
Name | Required | Description
dependents | No | Number of dependents for family deduction
expenses_pct | No | Deductible expenses as % of rent (default 40%)
annual_rent_mad | Yes | Annual rental income in MAD

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It only says 'Calculate' without disclosing what the tool returns (e.g., total tax, breakdown) or any side effects. For a tax calculation tool, more behavioral context is needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no unnecessary words. Front-loaded with key action and resource. Highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The output schema is generic, and the description does not indicate what the tool returns (e.g., tax amount in MAD) or state any assumptions. For a complex tax calculation, this is insufficient for an agent to understand the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all three parameters with descriptions (100% coverage). The description adds no extra meaning beyond the schema, meeting baseline for a 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb and resource: 'Calculate Moroccan property income tax'. It distinguishes from sibling tools like 'calculate_moroccan_income_tax' and 'calculate_moroccan_vat', but could be more specific about the type of property income (e.g., rental income).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as 'calculate_moroccan_income_tax' or other property-related calculators. No conditions or exclusions provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_moroccan_vat (grade C)

Compute Moroccan VAT (TVA) — convert between HT and TTC. Use for invoicing in Morocco. Inputs: amount, rate (20/14/10/7), mode (ht/ttc). Returns HT, TTC, tax amount. See list_bundles for related 'finance-afrique-quebec' calculators.
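The HT/TTC conversion named in the description is straightforward arithmetic; a minimal sketch, assuming the mode and rate semantics shown in the schema:

```python
def moroccan_vat(amount: float, rate: float = 20.0, mode: str = "ht") -> dict:
    """Convert between HT (excl. tax) and TTC (incl. tax) at a VAT rate in %."""
    factor = 1 + rate / 100
    if mode == "ht":
        ht, ttc = amount, amount * factor
    elif mode == "ttc":
        ht, ttc = amount / factor, amount
    else:
        raise ValueError("mode must be 'ht' or 'ttc'")
    return {"ht": round(ht, 2), "ttc": round(ttc, 2), "tax": round(ttc - ht, 2)}
```

For example, 100 MAD HT at the standard 20% rate gives 120 MAD TTC and 20 MAD of tax; the inverse call with mode "ttc" recovers the HT amount.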

Parameters (JSON Schema)
Name | Required | Description | Default
mode | No | Input mode: ht=hors taxe, ttc=toutes taxes comprises | ht
rate | No | VAT rate: 0%, 7%, 10%, 14%, or 20% (standard) | 20
amount | Yes | Amount in MAD |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It only states the calculation function but does not disclose any behavioral traits such as whether it returns VAT amount, total, or both, or any validation/error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and clearly conveys the core purpose without extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool and full schema coverage, the description is adequate for basic understanding. However, the lack of any mention of return format or behavior may leave an agent uncertain about the output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds no additional semantics beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and resource 'Moroccan VAT (TVA)' with a specification of rates. It distinguishes from many sibling tools by country, but does not explicitly differentiate from other country-specific VAT calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., calculate_vat_generic, calculate_french_vat). There is no mention of prerequisites or contexts where this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_mortgage (grade B)

Calculate mortgage/loan monthly payment, total cost, and optional amortization schedule. Returns: {principal, months, monthly_payment, total_interest, total_cost}. See list_bundles for related 'finance-france' calculators.
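The monthly payment follows the standard annuity formula, M = P·r / (1 - (1+r)^-n) with r the monthly rate and n the number of months. A minimal sketch, not the server's implementation, and omitting the optional amortization schedule:

```python
def mortgage_payment(principal: float, annual_rate: float, years: int) -> dict:
    """Standard annuity: M = P*r / (1 - (1+r)**-n), with a zero-rate fallback."""
    n = years * 12
    r = annual_rate / 100 / 12
    monthly = principal / n if r == 0 else principal * r / (1 - (1 + r) ** -n)
    total = monthly * n
    return {"monthly_payment": round(monthly, 2),
            "total_interest": round(total - principal, 2),
            "total_cost": round(total, 2)}
```

For instance, 200,000 borrowed over 20 years at 3% comes to roughly 1,109 per month.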

Parameters (JSON Schema)
Name | Required | Description
years | Yes | Loan duration in years
principal | Yes | Loan amount in currency units
annual_rate | Yes | Annual interest rate in %
with_schedule | No | Include first 12 months + last month amortization

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the burden. It discloses that the tool computes monthly payment, total cost, and an optional amortization schedule. However, it does not detail boundary behavior (e.g., edge cases for interest rates) or that the amortization schedule only covers first 12 months + last month (as per schema). Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description efficiently conveys the tool's purpose. It is concise and front-loaded, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the input schema covers all parameters (100% description coverage) and the output schema is generic, the description adequately informs the agent of the primary outputs (monthly payment, total cost, amortization schedule). It is complete enough for a straightforward financial calculation tool, though it could mention the limited amortization schedule scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are already well-documented. The description adds only a high-level summary ('mortgage/loan monthly payment, total cost, and optional amortization schedule') without extra semantic detail beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates monthly payment, total cost, and optional amortization schedule for a mortgage or loan. While specific and action-oriented, it does not differentiate from sibling tools like 'calculate_loan_payment' or 'calculate_us_mortgage', but the resource and outputs are distinct enough.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Given many sibling tools for loans and mortgages, such as 'calculate_us_mortgage' or 'calculate_mortgage_insurance', the lack of usage context is a significant gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_mortgage_insurance (grade C)

Calculate mortgage insurance (assurance emprunteur) cost. Returns: {monthly_insurance, annual_insurance, total_insurance}. See list_bundles for related 'immobilier' calculators.
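A flat-rate assurance emprunteur computed on the initial loan amount, which is the common French convention and an assumption here, matches the schema's default 0.36% annual rate:

```python
def mortgage_insurance(loan_amount: float, duration_years: int,
                       rate_pct: float = 0.36) -> dict:
    """Flat-rate insurance on the initial capital (capital initial convention)."""
    annual = loan_amount * rate_pct / 100
    return {"monthly_insurance": round(annual / 12, 2),
            "annual_insurance": round(annual, 2),
            "total_insurance": round(annual * duration_years, 2)}
```

On a 200,000 EUR loan over 20 years at the default rate, that is 60 EUR per month and 14,400 EUR in total. Note that some contracts instead compute insurance on the outstanding balance (capital restant du), which yields a lower total.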

Parameters (JSON Schema)
Name | Required | Description
rate_pct | No | Annual insurance rate in % of loan (default 0.36)
loan_amount | Yes | Loan amount in EUR
duration_years | Yes | Loan duration in years

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description does not disclose any behavioral traits beyond the calculation action. It does not mention if the tool is read-only, what it returns, or any constraints (e.g., valid ranges beyond schema). The description adds no value over the schema and name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that immediately states the purpose. It is front-loaded and contains no unnecessary words. While it could be expanded slightly for context, it is efficient and avoids verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is minimal given the generic output schema and lack of annotations. It does not explain the output format (e.g., monthly cost, total cost), nor does it provide any background on the French mortgage insurance system. An agent may not know what to expect as a return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so each parameter has its own description. The tool description does not add any additional meaning or context about the parameters or their relationships. A score of 3 is appropriate as the schema already does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Calculate') and resource ('mortgage insurance cost'), including the French term 'assurance emprunteur'. It is specific and distinguishes the tool from siblings like 'calculate_mortgage' by focusing on insurance rather than the loan payment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For instance, it does not mention that 'calculate_mortgage' might already include insurance or when to prefer this dedicated tool. The description lacks any contextual cues for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_motor_torque (grade C)

Compute motor torque from power and RPM. T(Nm)=9550·P(kW)/RPM. Use for mechanical sizing. Inputs: power kW, rpm. Returns torque in N·m and lb-ft. See list_bundles for related 'science' calculators.
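Note the description quotes T(Nm)=9550·P(kW)/RPM while the schema's power_w is in watts; the two agree once units are converted, since 9550 is approximately 60000/(2π). A sketch using watts, derived from T = P/ω with ω = 2π·rpm/60:

```python
import math

def motor_torque(power_w: float, rpm: float) -> dict:
    """T = P / omega, omega = 2*pi*rpm/60; same as T = 9550 * P_kW / rpm."""
    torque_nm = power_w / (2 * math.pi * rpm / 60)
    return {"torque_nm": round(torque_nm, 2),
            "torque_lbft": round(torque_nm * 0.7376, 2)}  # 1 N·m ≈ 0.7376 lb·ft
```

For example, a 1 kW motor at 3000 rpm develops about 3.18 N·m.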

Parameters (JSON Schema)
Name | Required | Description
rpm | Yes | RPM
power_w | Yes | Power in watts

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It fails to mention any assumptions, formula used, units of output, or constraints. This is a significant gap for a calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded. While it could benefit from slightly more detail, it is efficient for a simple calculation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The output schema is generic, and the description leaves the return value and units underspecified. For a calculation tool, users need to know what the output represents. Incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptive names and descriptions for both parameters. The description adds no additional semantic value beyond the schema, resulting in a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Motor torque from power and RPM' clearly states the specific verb-resource relationship and distinguishes it from other calculate_* siblings. It is precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No context on prerequisites or exclusions. The description simply states the calculation, leaving the agent to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_moving_cost_detailed (grade B)

Estimate detailed moving cost based on volume, distance and floor. Returns: {base_cost, total_cost_eur, note}. See list_bundles for related 'immobilier' calculators.
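The server's pricing model is not documented, so any sketch must invent rates; every constant below is hypothetical and only illustrates how volume, distance, floor, and elevator availability could combine into a cost:

```python
# Hypothetical rates, for illustration only; the server's actual pricing
# model is undocumented, so these constants are pure assumptions.
RATE_PER_M3 = 35.0      # EUR per cubic metre moved
RATE_PER_KM = 1.2       # EUR per kilometre of distance
FLOOR_SURCHARGE = 0.05  # +5% of base cost per floor when no elevator

def moving_cost(volume_m3: float, distance_km: float,
                floor: int = 0, elevator: bool = True) -> dict:
    base = volume_m3 * RATE_PER_M3 + distance_km * RATE_PER_KM
    surcharge = 0.0 if elevator else base * FLOOR_SURCHARGE * floor
    return {"base_cost": round(base, 2),
            "total_cost_eur": round(base + surcharge, 2)}
```

With these made-up rates, 20 m³ over 100 km costs 820 EUR at ground level, and 15% more for a third-floor walk-up.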

Parameters (JSON Schema)
Name | Required | Description
floor | No | Floor number (default 0 = ground floor)
elevator | No | Whether elevator is available (default true)
volume_m3 | Yes | Volume of goods to move in m3
distance_km | Yes | Moving distance in km

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosure. It only states 'estimate detailed moving cost' but does not describe return format, units (currency?), whether cost includes taxes, or any side effects. The behavioral model is under-specified beyond the basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence (10 words) that communicates the core purpose without redundancy. However, it could be slightly expanded to include more guidance without losing conciseness. The structure is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations, output schema, and the presence of many sibling calculators (including 'calculate_moving_volume'), the description is insufficient. It does not specify the output format, currency, or how the cost is calculated. An agent cannot fully determine when and how to use this tool correctly based solely on this description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds contextual grouping by mentioning 'volume, distance and floor' but omits the 'elevator' parameter. It does not add significant meaning beyond what the schema already provides, but it highlights the most relevant parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Estimate'), the resource ('detailed moving cost'), and the key inputs ('volume, distance and floor'). This distinguishes it from the sibling tool 'calculate_moving_volume' which likely calculates volume, not cost. The purpose is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or comparisons to other tools. The description is purely declarative without usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_moving_volume (C)

Estimate moving volume (m³) by home type and contents. Use for moving company quotes. Inputs: home size, rooms, furniture density. Returns m³ and truck size recommendation. See list_bundles for related 'vie-quotidienne' calculators.

Parameters
Name | Required | Description | Default
type | Yes | Home type |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits such as whether the tool is read-only, destructive, or requires authentication. The short description does not compensate for the missing metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and contains no wasted words. It is appropriately sized for a simple tool, though slightly more context could be beneficial.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description fails to clarify what 'moving volume' means, the unit of output, or any assumptions. It lacks completeness for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single parameter (enum values described as 'Home type'). The description adds minimal value by repeating 'by home type', but does not clarify format or units beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates moving volume based on home type, using a specific verb-resource combination. It distinguishes from most siblings by the unique 'moving volume' context, though it could be more precise about what 'moving volume' entails.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor any when-not-to-use or prerequisites. It simply states the function without contextual placement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_net_worth (B)

Calculate net worth and debt ratio from assets and liabilities. Returns: {net_worth_eur, debt_ratio_pct}. See list_bundles for related 'finance-universal' calculators.
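
The arithmetic this tool wraps is conventional. Below is a minimal sketch of the documented output keys, assuming the standard definitions (net worth = assets minus liabilities, debt ratio = liabilities over assets); the function name and rounding are illustrative, not the server's implementation:

```python
def net_worth(assets_total: float, liabilities_total: float) -> dict:
    """Sketch of the documented {net_worth_eur, debt_ratio_pct} result."""
    worth = assets_total - liabilities_total
    # Debt ratio as a percentage of assets; guard against zero assets.
    ratio = (liabilities_total / assets_total * 100.0) if assets_total else 0.0
    return {"net_worth_eur": round(worth, 2), "debt_ratio_pct": round(ratio, 2)}
```

With assets of 300,000 EUR and liabilities of 120,000 EUR this yields a net worth of 180,000 EUR and a 40% debt ratio.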

Parameters
Name | Required | Description | Default
assets_total | Yes | Total assets EUR |
liabilities_total | Yes | Total liabilities EUR |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description must disclose behavioral traits. It does not explain how net worth and debt ratio are computed (e.g., net worth = assets - liabilities, debt ratio = liabilities/assets), the output format, or any potential side effects. This leaves ambiguity for the agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient, with no redundant words. It is concise and front-loaded with the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with no tool-specific output schema, the description should explain what the debt ratio represents (e.g., the ratio of liabilities to assets) and the unit of net worth. It is incomplete for an agent to fully understand the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for both parameters ('Total assets EUR', 'Total liabilities EUR'). The description adds context about calculated outputs but does not explicitly map parameters to formulas. Baseline 3 is appropriate as schema does most of the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates 'net worth and debt ratio' from 'assets and liabilities', using specific verbs and resources. It distinguishes itself from siblings like calculate_debt_to_income or calculate_debt_capacity by naming unique outputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool over alternatives, such as other financial calculators. The description does not mention prerequisites, exclusions, or context for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_night_shift_pay (A)

Calculate night shift pay (21h-6h) with configurable premium percentage. Returns: {night_hourly_rate, total_pay, premium_earned}. See list_bundles for related 'temps-rh' calculators.
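
The documented return keys map directly onto a simple premium calculation. A sketch, assuming the premium is applied uniformly to every night hour (the function name and the absence of rounding are assumptions):

```python
def night_shift_pay(night_hours: float, base_hourly_rate: float,
                    premium_pct: float = 25.0) -> dict:
    """Sketch of the documented {night_hourly_rate, total_pay, premium_earned}."""
    night_rate = base_hourly_rate * (1.0 + premium_pct / 100.0)
    total = night_hours * night_rate
    # Premium earned is the extra pay over the same hours at the base rate.
    premium = total - night_hours * base_hourly_rate
    return {"night_hourly_rate": night_rate, "total_pay": total,
            "premium_earned": premium}
```

Eight night hours at 12 EUR/h with the default 25% premium gives a 15 EUR night rate, 120 EUR total, and 24 EUR of premium.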

Parameters
Name | Required | Description | Default
night_hours | Yes | Number of night hours worked (21h-6h) |
premium_pct | No | Night shift premium percentage (default 25%) |
base_hourly_rate | Yes | Normal hourly rate in euros |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not disclose the return format, whether the calculation is read-only, currency handling, or error behavior. The description adds configurable premium and time window but lacks essential behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short, with no unnecessary information. It is front-loaded with the key verb and resource, making it efficient for an AI agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with three parameters and no output schema, the description is minimally adequate but lacks details about the output (e.g., total pay or per hour). It is sufficient for a basic understanding but leaves gaps in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents all parameters. The description adds the concept of 'night shift pay' linking to the time window, but does not provide additional meaning beyond what is in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate' and resource 'night shift pay', specifies the time window '21h-6h', and mentions the configurable premium percentage. It effectively distinguishes this tool from other pay calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for night shift pay between 21h-6h with a configurable premium, but does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention prerequisites or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_notary_fees (B)

Calculate French notary fees (frais de notaire) for a real estate purchase. Returns: {price, droits_mutation, emoluments_notaire, frais_divers, total_frais, total_pct}. See list_bundles for related 'finance-france' calculators.
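
The fee components named in the return object follow a well-known structure: transfer duties (droits de mutation), regulated notary emoluments on a sliding scale, and disbursements. The sketch below uses approximate, illustrative rates only; the actual DMTO rate and emolument barème vary by department and year and should be taken from the tool's cited sources, not from this example:

```python
def notary_fees(price: float, property_type: str = "ancien") -> dict:
    """Illustrative sketch of the documented fee components."""
    # DMTO: roughly 5.81% for existing homes, 0.715% for new builds (illustrative).
    dmto_rate = 0.0581 if property_type == "ancien" else 0.00715
    droits_mutation = price * dmto_rate
    # Notary emoluments: regulated sliding scale (approximate 2021+ brackets).
    brackets = [(6_500, 0.03870), (17_000, 0.01596),
                (60_000, 0.01064), (float("inf"), 0.00799)]
    emoluments, lower = 0.0, 0.0
    for upper, rate in brackets:
        if price > lower:
            emoluments += (min(price, upper) - lower) * rate
        lower = upper
    frais_divers = 800.0  # flat placeholder for disbursements
    total = droits_mutation + emoluments + frais_divers
    return {"price": price, "droits_mutation": droits_mutation,
            "emoluments_notaire": emoluments, "frais_divers": frais_divers,
            "total_frais": total, "total_pct": total / price * 100.0}
```

For a 200,000 EUR existing home this lands near the commonly quoted 7-8% of the purchase price.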

Parameters
Name | Required | Description | Default
type | No | Property type: ancien (old) or neuf (new) | ancien
price | Yes | Purchase price in euros |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description frames the tool as a pure calculation. It does not disclose side effects, idempotency, or any behavioral traits beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description efficiently conveys the tool's purpose. It is front-loaded and contains no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool, the description is adequate but lacks differentiation from a sibling tool and does not mention the 'type' parameter's importance. It meets minimal needs but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description does not need to add meaning. It does not elaborate on parameters like 'type' (ancien/neuf), but the schema already provides defaults and enums.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates French notary fees for real estate purchases. However, it does not differentiate from the sibling tool 'calculate_notary_fees_detailed', which likely provides a more detailed calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'calculate_notary_fees_detailed'. No context on prerequisites or typical use cases is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_notary_fees_detailed (B)

Detailed breakdown of French notary fees by component (taxes, emoluments, debours). Use for property buyers in France. Inputs: property price, type (new/old), department. Returns total fees and per-component breakdown. See list_bundles for related 'finance-france' calculators.

Parameters
Name | Required | Description | Default
type | Yes | Property type |
department | No | French department code (optional, affects DMTO rate) |
property_price | Yes | Property price EUR |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. Its phrasing implies a read-only estimate, but it does not explicitly state idempotency, safety, or what happens (e.g., no side effects, no writes).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is tight, with no wasted words. It is appropriately brief, though it could benefit from slightly more detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should clarify what the breakdown includes (e.g., taxes, fees, percentages). It does not, leaving the agent to infer the return structure. The tool is detailed, but the description is too minimal for proper context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema. The schema already provides descriptions for all parameters, including the effect of department on DMTO rate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description names a specific resource ('detailed breakdown of French notary fees by component'), clearly indicating it provides a per-component estimate for a property purchase. It distinguishes itself from the simpler sibling 'calculate_notary_fees' by specifying the breakdown.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this detailed version versus the simpler 'calculate_notary_fees' or other tools. No prerequisites or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_number_base_convert (C)

Convert a number between bases 2 (binary), 8 (octal), 10 (decimal), and 16 (hex). Use for programming. Inputs: value, from-base, to-base. Returns converted number. See list_bundles for related 'conversions' calculators.
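
The conversion itself is two steps: parse the string in the source base, then re-emit digits in the target base. A sketch under that reading (the handling of negatives here is an assumption, not documented server behavior):

```python
DIGITS = "0123456789abcdef"

def convert_base(value: str, from_base: int, to_base: int) -> str:
    """Convert a numeric string between bases 2, 8, 10, and 16."""
    n = int(value, from_base)  # raises ValueError on digits invalid for from_base
    sign = "-" if n < 0 else ""
    n = abs(n)
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, to_base)  # peel off digits least-significant first
        out.append(DIGITS[r])
    return sign + "".join(reversed(out))
```

For example, "ff" in base 16 becomes "11111111" in base 2, and "255" in base 10 becomes "ff" in base 16.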

Parameters
Name | Required | Description | Default
value | Yes | Number to convert as string |
to_base | Yes | Target base |
from_base | Yes | Source base |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description carries the full burden. It only states conversion between bases, with no disclosure of behavior for negative numbers, fractions, large numbers, or error handling. The input schema already defines the parameters, so the description adds no extra behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise and front-loaded, and it enumerates the supported bases clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no tool-specific output schema, and the description lacks the return-value format, error cases, and limitations. For a tool with three required parameters, it is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all three parameters. The description does not add new meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (convert), resource (a number), and the specific bases (2, 8, 10, 16). It distinguishes from other conversion tools, but could be more explicit about the limitation to only these four bases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives, such as the sibling 'calculate_base_converter', and no context on prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ohms_law (A)

Compute Ohm's law: V=I·R. Solve for any of V, I, R given the other two. Use for electronics. Inputs: any 2 of (V volts, I amps, R ohms). Returns the third. See list_bundles for related 'science' calculators.
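
The "any two of three" contract the description states can be sketched directly; the function name, error behavior, and return shape below are assumptions for illustration:

```python
def ohms_law(voltage=None, current=None, resistance=None):
    """Solve V = I*R for whichever of the three quantities is omitted."""
    if [voltage, current, resistance].count(None) != 1:
        raise ValueError("provide exactly two of voltage, current, resistance")
    if voltage is None:
        return {"voltage": current * resistance}      # V = I*R
    if current is None:
        return {"current": voltage / resistance}      # I = V/R
    return {"resistance": voltage / current}          # R = V/I
```

For example, 2 A through 5 Ω gives 10 V.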

Parameters
Name | Required | Description | Default
current | No | Amps |
voltage | No | Volts |
resistance | No | Ohms |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description states it solves for the missing value, which implies a mathematical operation with no side effects. However, no annotations are provided, and the description does not detail behavior like input validation or handling of multiple provided parameters. With zero annotations, the description carries the burden but is slightly lacking in detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Short sentences, front-loaded with the formula, with no superfluous words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator tool with 100% schema coverage and no output schema, the description is adequate. It explains the purpose and the formula. Could optionally mention return value, but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all three parameters with descriptions (Amps, Volts, Ohms) giving 100% coverage. Description adds the relationship (V=IR) and implies only two of three need to be provided, which goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it computes Ohm's law (V=I·R) and solves for the missing value. This is a specific verb-plus-resource pairing that distinguishes it from the many other calculate_* tools on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It's clear that the tool is used for solving Ohm's law problems, implying you use it when you need to find one of current, voltage, or resistance given the other two. No explicit alternatives or when-not-to-use, but context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_one_rep_max (C)

Estimate 1 repetition maximum from submaximal lift using Epley, Brzycki and Lombardi formulas. Returns: {epley_1rm, brzycki_1rm, lombardi_1rm, average_1rm}. See list_bundles for related 'sante' calculators.
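
The three named estimators are published formulas (Epley, Brzycki, Lombardi); the averaging and the lack of rounding in this sketch are assumptions about the server's behavior:

```python
def one_rep_max(weight_lifted: float, reps: int) -> dict:
    """Sketch of the documented result keys using the three named formulas."""
    epley = weight_lifted * (1.0 + reps / 30.0)
    brzycki = weight_lifted * 36.0 / (37.0 - reps)   # only valid for reps < 37
    lombardi = weight_lifted * reps ** 0.10
    return {"epley_1rm": epley, "brzycki_1rm": brzycki,
            "lombardi_1rm": lombardi,
            "average_1rm": (epley + brzycki + lombardi) / 3.0}
```

At reps=1 both Brzycki and Lombardi return the lifted weight itself, which is a quick sanity check.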

Parameters
Name | Required | Description | Default
reps | Yes | Number of repetitions performed |
weight_lifted | Yes | Weight lifted in kg or lbs |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It does not disclose behavioral traits like output format (e.g., single value or multiple formula results), rounding, precision, or assumptions (e.g., reps must be ≤ 10 for formula validity). The transparency is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded, with no wasted words. It efficiently communicates the core action and specifics.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema or annotations, the description is too brief. It does not explain what the tool returns (e.g., an object with formula results), any units consistency, or limitations. For a tool that likely produces structured output, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions (min/max for reps, min for weight). The description does not add any extra meaning beyond the schema, meeting the baseline for a simple tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates 1 repetition maximum using three named formulas (Epley, Brzycki, Lombardi), which is specific and informative. However, it does not explicitly distinguish from the sibling tool 'calculate_1rm_table', which might perform a similar calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives or any conditions/limitations. The description only states what it does, leaving the agent with no context for choosing it over other calculation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_overtime_fr (C)

Compute French overtime pay (heures supplémentaires) per labor code. Use for HR or employee verification. Inputs: hourly rate, normal hours, overtime hours. Returns gross overtime pay with 25%/50% premiums. See list_bundles for related 'finance-france' calculators.
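
Under the default French rule, 35h/week is the threshold, the first 8 overtime hours (the 36th to 43rd) carry a 25% premium, and hours beyond carry 50%. A sketch under that assumption (the function name and return keys are illustrative, and the server may apply a collective-agreement variant):

```python
def overtime_pay_fr(actual_hours: float, hourly_rate: float,
                    base_hours: float = 35.0) -> dict:
    """Gross overtime pay with the default 25%/50% premium split."""
    overtime = max(0.0, actual_hours - base_hours)
    at_25 = min(overtime, 8.0)           # 36th-43rd hour: +25%
    at_50 = max(0.0, overtime - 8.0)     # beyond the 43rd hour: +50%
    gross = at_25 * hourly_rate * 1.25 + at_50 * hourly_rate * 1.50
    return {"overtime_hours": overtime, "gross_overtime_pay": gross}
```

A 44h week at 10 EUR/h yields 9 overtime hours: 8 at 12.50 EUR plus 1 at 15 EUR, so 115 EUR gross.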

Parameters
Name | Required | Description | Default
base_hours | No | Base weekly hours |
hourly_rate | Yes | Hourly rate EUR |
actual_hours | Yes | Actual weekly hours |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must convey behavioral traits. It does not disclose the French labor-law basis (the 35 h/week threshold), the overtime multipliers applied, or the output format. The minimal description provides no insight into how the calculation is performed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, but it lacks necessary detail. While it contains no extraneous text, it under-specifies the tool's behavior: conciseness is a plus, but impactful information is missing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters and no output schema or annotations, the description is incomplete. It does not explain the return value, the formula used (e.g., different overtime rates), or any constraints (e.g., only for France). More detail is needed for the agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and all parameters have descriptions (actual_hours, hourly_rate, base_hours). The description adds no additional meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'French overtime pay calculation' states the general purpose (calculate overtime pay for France), but does not differentiate from the similar sibling 'calculate_overtime_pay_fr' which could be for a different country or specific variant. The purpose is clear but ambiguous due to lack of distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like 'calculate_overtime_pay_fr' or other calculate tools. No prerequisites or conditions are mentioned, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_overtime_pay_fr (Grade A)

Calculate French overtime pay: first 8h at +25%, beyond 8h at +50% (weekly threshold 35h). Returns: {hours_at_25pct, hours_at_50pct, pay_25pct_zone, pay_50pct_zone, total_overtime_pay, extra_vs_normal}. See list_bundles for related 'temps-rh' calculators.
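The documented rates and return keys translate directly into a sketch. This mirrors, rather than reproduces, the server's logic; the two-decimal rounding is an assumption.

```python
def overtime_breakdown(overtime_hours, base_hourly_rate):
    """Split overtime into the +25% zone (first 8 h) and the +50% zone,
    mirroring the documented return keys."""
    hours_25 = min(overtime_hours, 8.0)
    hours_50 = max(0.0, overtime_hours - 8.0)
    pay_25 = hours_25 * base_hourly_rate * 1.25
    pay_50 = hours_50 * base_hourly_rate * 1.50
    total = pay_25 + pay_50
    return {
        "hours_at_25pct": hours_25,
        "hours_at_50pct": hours_50,
        "pay_25pct_zone": round(pay_25, 2),
        "pay_50pct_zone": round(pay_50, 2),
        "total_overtime_pay": round(total, 2),
        # what the premiums add on top of plain hourly pay
        "extra_vs_normal": round(total - overtime_hours * base_hourly_rate, 2),
    }
```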

Parameters (JSON Schema)
Name | Required | Description | Default
overtime_hours | Yes | Total overtime hours worked beyond 35h/week
base_hourly_rate | Yes | Normal hourly rate in euros

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It correctly indicates this is a calculation tool with no side effects. However, it does not disclose assumptions or limitations (e.g., standard French labor law, no special exceptions). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the action and includes all critical details without redundancy. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with two parameters and no output schema, the description is complete. It clearly defines the rates and threshold. Minor omission: could explicitly state it follows standard French labor law, but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers both parameters with descriptions (100% coverage). The description adds value by explaining how overtime hours are split into first 8 and beyond with corresponding rates, providing context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates French overtime pay, specifies the exact rates (+25% for first 8 hours, +50% beyond) and the weekly threshold (35 hours). The name aligns, and it distinguishes from sibling tools like calculate_overtime_fr by providing specific rate details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives or any exclusions. The description only states what it does, leaving the agent to infer context from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ovulation (Grade C)

Calculate ovulation date and fertile window from last period and cycle length. Returns: {lmp, ovulation_date, fertile_window_start, fertile_window_end, next_period}. See list_bundles for related 'sante' calculators.
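A plausible calendar-method reconstruction of the documented return fields is below. The 14-day luteal phase and the 6-day fertile window are standard textbook assumptions, not something the tool description confirms.

```python
from datetime import date, timedelta

def ovulation_estimate(last_period_date, cycle_length=28):
    """Calendar-method estimate: luteal phase assumed 14 days;
    fertile window assumed to span the 5 days before ovulation
    plus ovulation day itself. Illustrative, not medical advice."""
    lmp = date.fromisoformat(last_period_date)
    ovulation = lmp + timedelta(days=cycle_length - 14)
    return {
        "lmp": lmp.isoformat(),
        "ovulation_date": ovulation.isoformat(),
        "fertile_window_start": (ovulation - timedelta(days=5)).isoformat(),
        "fertile_window_end": ovulation.isoformat(),
        "next_period": (lmp + timedelta(days=cycle_length)).isoformat(),
    }
```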

Parameters (JSON Schema)
Name | Required | Description | Default
cycle_length | No | Menstrual cycle length in days
last_period_date | Yes | YYYY-MM-DD — First day of last menstrual period

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It does not disclose behavioral traits such as whether the tool is read-only, requires authentication, or has side effects. A simple calculator likely has no side effects, but this is not stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the verb and resource, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks information about the tool's output. Without an output schema, users are left to guess what format the ovulation date and fertile window come in. For a tool with no annotations and no output schema, the description should compensate with more detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description echoes parameter names without adding significant new meaning beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates ovulation date and fertile window from last period date and cycle length. It uses a specific verb and resource, but does not explicitly differentiate from sibling tools like calculate_menstrual_cycle or calculate_due_date.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not provide context, exclusions, or alternative tool suggestions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_paint_needed (Grade C)

Compute paint quantity for walls including coats and waste margin. Use for renovation budgeting. Inputs: room dimensions, coats, openings. Returns paint liters and a recommended purchase quantity. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
coats | No | Number of coats
area_m2 | Yes | Wall area m²
coverage | No | Coverage m²/liter

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description fails to disclose any behavioral traits. The tool's behavior (e.g., returns quantity in liters) is not mentioned. The description is insufficient for an agent to understand side effects or requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short, which keeps it concise, but it lacks structure and omits important details. Excessive brevity reduces its usefulness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of output schema and annotations, the description should clarify what the tool returns (e.g., liters of paint). It does not mention the output unit or the formula used. Incomplete for a simple calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Parameter schema coverage is 100% (all three parameters have descriptions). The description adds no additional meaning beyond the schema, which already covers the parameters. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Paint quantity for walls' states the domain (paint quantity for walls) but does not specify the verb (e.g., calculate) nor distinguish it from siblings like 'calculate_paint_quantity' or 'calculate_wallpaper'. The purpose is moderately clear but lacks differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not indicate when to use this tool over alternatives, such as 'calculate_paint_quantity' or other related tools. Lack of context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_paint_quantity (Grade C)

Compute paint liters needed for a surface with chosen number of coats. Use for painting projects. Inputs: surface m², coats, paint coverage m²/L. Returns liters and number of cans. See list_bundles for related 'construction' calculators.
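The liters-and-cans output described above can be approximated as follows; the 10 m²/L coverage and 2.5 L can size defaults are illustrative assumptions, not values from the server.

```python
import math

def paint_needed(area_m2, coats=2, coverage=10.0, can_size_l=2.5):
    """Liters = area x coats / coverage; cans round up to whole units.
    The coverage and can-size defaults are illustrative assumptions."""
    liters = area_m2 * coats / coverage
    cans = math.ceil(liters / can_size_l)
    return round(liters, 2), cans

# 40 m2, 2 coats, 10 m2/L coverage -> 8 L, i.e. four 2.5 L cans
print(paint_needed(40.0))  # (8.0, 4)
```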

Parameters (JSON Schema)
Name | Required | Description | Default
coats | No | Coats
area_m2 | Yes | Area in m²
coverage | No | m²/liter

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description is minimal. It does not disclose what the tool returns (e.g., total liters, cans), nor any constraints like rounding or edge cases. The behavioral burden is unmet.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence. No wasted words, but could add more context without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fails to specify return format, result units, or output interpretation. With no output schema, the description should compensate but does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions. Description does not add significant meaning beyond the schema; baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb+resource: 'Calculate paint needed for a surface'. However, there is a sibling tool 'calculate_paint_needed' with a similar name, and no differentiation is provided.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_paint_needed'. No when-to-use or when-not-to-use conditions are specified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_paper_size_convert (Grade B)

Get dimensions (mm, in) of standard paper formats (A0-A10, B0-B10, US Letter, Legal, Tabloid). Use for printing. Inputs: format name. Returns dimensions in mm and inches. See list_bundles for related 'conversions' calculators.
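A lookup of this kind reduces to a small table. The sketch below covers a subset of the documented formats using standard ISO 216 and ANSI dimensions; it is an illustration, not the server's data.

```python
# Width x height in millimetres; ISO 216 A-series and ANSI US formats.
PAPER_MM = {
    "A3": (297, 420),
    "A4": (210, 297),
    "A5": (148, 210),
    "Letter": (215.9, 279.4),
    "Legal": (215.9, 355.6),
    "Tabloid": (279.4, 431.8),
}

def paper_size(fmt):
    """Return the format's dimensions in both mm and inches."""
    w_mm, h_mm = PAPER_MM[fmt]
    return {
        "mm": (w_mm, h_mm),
        "in": (round(w_mm / 25.4, 2), round(h_mm / 25.4, 2)),
    }

print(paper_size("Letter")["in"])  # (8.5, 11.0)
```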

Parameters (JSON Schema)
Name | Required | Description | Default
format | Yes | Paper format name

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states 'Get dimensions' without disclosing whether the operation is read-only, what units are used, or any side effects. The behavioral traits are largely absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundancy. It is appropriately sized for a simple tool, though it could be slightly expanded to include output details without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter and no output schema, the description is minimally complete. However, it lacks details about the output format (e.g., units like mm or inches) and does not clarify the 'convert' aspect in the name, leaving some ambiguity about the tool's full behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the 'format' parameter has an enum and description). The tool description does not add any additional meaning beyond the schema. Baseline score of 3 is appropriate since the schema already documents the parameter adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get dimensions of standard paper formats', using a specific verb ('Get') and resource ('dimensions of standard paper formats'). It is distinct from sibling tools, many of which are calculators or unrelated conversions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No mention of prerequisites, when not to use, or related tools. The description is purely functional without contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_parcoursup_points (Grade C)

Estimate Parcoursup admission score from bac + lycée grades. Use for French university candidates. Inputs: grades and coefficients per subject. Returns estimated score. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
bac_average | Yes | Expected/actual bac average (/20)
option_bonus | No | Bonus points from options

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fails to disclose behavioral traits such as calculation method, data dependencies, or reliability. It merely states 'estimate' without elaboration.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single brief sentence, which is concise but lacks structure or additional detail. It could be more informative without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two fully described parameters and no output schema, the description provides the core purpose but is missing context about accuracy, assumptions, or use cases. It is minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% so baseline is 3. The description adds no extra parameter-specific meaning beyond what the schema already provides (bac_average range and option_bonus default).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'estimate' and the specific resource 'Parcoursup admission score'. However, it does not differentiate from the sibling tool 'calculate_parcoursup_score', which may have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no explicit context or exclusions. The description implies it's for estimating Parcoursup scores but offers no additional usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_parcoursup_score (Grade B)

Estimate Parcoursup weighted score from French baccalaureate component grades. See list_bundles for related 'education' calculators.
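A weighted score of this kind is simply a weighted mean of the /20 component grades. In the sketch below the weights are placeholder values for illustration only, since the description does not state the actual weighting.

```python
def parcoursup_score(bac_general_average, specialite_1_average,
                     specialite_2_average, grand_oral_note,
                     controle_continu_average,
                     weights=(0.3, 0.2, 0.2, 0.1, 0.2)):
    """Weighted mean of /20 component grades. The default weights are
    placeholder values, NOT the official Parcoursup weighting."""
    grades = (bac_general_average, specialite_1_average,
              specialite_2_average, grand_oral_note,
              controle_continu_average)
    return round(sum(g * w for g, w in zip(grades, weights)), 2)
```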

Parameters (JSON Schema)
Name | Required | Description | Default
grand_oral_note | Yes | Grand Oral examination grade out of 20
bac_general_average | Yes | General baccalauréat average out of 20
specialite_1_average | Yes | First speciality subject average out of 20
specialite_2_average | Yes | Second speciality subject average out of 20
controle_continu_average | Yes | Continuous assessment (contrôle continu) average out of 20

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as side effects, permissions, or rate limits. As a calculator, it is likely read-only, but this is not stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words, perfectly concise and front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool is a simple calculation with well-documented parameters, the description is adequate. It does not specify return format, but this is not critical for a calculator.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no additional meaning beyond what the schema already provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it estimates a Parcoursup weighted score from baccalaureate component grades. However, it does not differentiate from the sibling tool 'calculate_parcoursup_points', which may cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_parcoursup_points'. No context on prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_part_time (Grade B)

Calculate part-time work percentage and optional pro-rata salary. Returns: {percentage, prorata_salary}. See list_bundles for related 'temps-rh' calculators.
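The percentage and pro-rata computation described can be sketched as follows; the French 35 h default comes from the schema, and the two-decimal rounding is an assumption.

```python
def part_time(part_time_hours, full_time_hours=35.0, full_salary=None):
    """Part-time percentage of a full-time schedule (FR default 35 h),
    with an optional pro-rata salary when full_salary is given."""
    pct = part_time_hours / full_time_hours * 100
    result = {"percentage": round(pct, 2)}
    if full_salary is not None:
        result["prorata_salary"] = round(full_salary * pct / 100, 2)
    return result

# 28h on a 35h base is an 80% contract
print(part_time(28.0, 35.0, 2000.0))  # {'percentage': 80.0, 'prorata_salary': 1600.0}
```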

Parameters (JSON Schema)
Name | Required | Description | Default
full_salary | No | Full-time salary to pro-rate (optional)
full_time_hours | No | Full-time weekly hours (FR default 35h)
part_time_hours | Yes | Part-time weekly hours

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but only states the calculation goal. It does not disclose that the operation is read-only (no side effects), how results are returned (e.g., numeric values or an object), or the assumptions applied (e.g., the default full_time_hours of 35).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that front-loads the core purpose. It is efficient with no wasted words, though it could be slightly more structured to include the return format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should clarify what the tool returns (e.g., objects with percentage and salary). It also lacks usage context, making it incomplete for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all three parameters. The description adds 'pro-rata salary' context but does not explain parameter-specific nuances beyond the schema, maintaining a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates 'part-time work percentage' and 'optional pro-rata salary', specifying a unique combination of outputs. It distinguishes itself from siblings by focusing on part-time work metrics, a specific niche among many calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs. other salary or percentage calculators (e.g., calculate_belgian_salary, calculate_percentage). There is no mention of prerequisites, limitations, or alternative use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_passport_validity (Grade C)

Check if passport is valid for travel (6-month rule). Returns: {note}. See list_bundles for related 'voyage' calculators.
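The 6-month rule reduces to a date comparison. The sketch below assumes the rule means "expiry on or after the travel date plus six calendar months", which the description does not spell out.

```python
from datetime import date

MONTH_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def add_months(d, months):
    """Calendar-month addition, clamping the day for short months."""
    m = d.month - 1 + months
    y, m = d.year + m // 12, m % 12 + 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    day = min(d.day, 29 if (m == 2 and leap) else MONTH_DAYS[m - 1])
    return date(y, m, day)

def passport_ok(expiry_date, travel_date):
    """True when the passport expires on/after travel date + 6 months."""
    return date.fromisoformat(expiry_date) >= add_months(
        date.fromisoformat(travel_date), 6)
```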

ParametersJSON Schema
NameRequiredDescriptionDefault
expiry_dateYesPassport expiry date YYYY-MM-DD
travel_dateYesPlanned travel date YYYY-MM-DD

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavioral traits. It states the rule check and a minimal return shape ({note}), but does not disclose whether the tool is read-only or requires any permissions, so critical behavioral context is still missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loads the purpose, but it lacks structure and could carry additional important details without becoming verbose. It is adequate but not optimally organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the schema richness, the description is partially complete. It explains the basic rule but omits the output format or return type. For a tool without output schema, this is a gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides complete coverage (100%) for both parameters with type and format descriptions. The description does not add any extra meaning beyond the schema. Baseline 3 is appropriate since the schema already documents parameters adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: checking passport validity for travel using the 6-month rule. It uses a specific verb ('Check') and resource ('passport'), and distinguishes itself from sibling tools by focusing on passport validity, which is unique among the many calculate tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives. It does not mention prerequisites, context, or situations where it should not be used. The description is too brief to offer any usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pasta_portionsBInspect

Calculate dry pasta, water and salt for a given number of people. See list_bundles for related 'cuisine' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
appetiteYesAppetite level
num_peopleYesNumber of people
pasta_typeYesPasta shape

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden for behavioral disclosure. It only states what is calculated but omits how (e.g., portion sizes per pasta type, appetite adjustments), output format, side effects, or assumptions. The agent lacks information about the tool's behavior beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no redundant words. Every word contributes to the purpose. Highly concise for the information conveyed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high schema coverage and no output schema, the description is minimally adequate. It explains the tool's function but lacks detail on output format, constraints (e.g., max people), or edge cases. For a simple tool, it is sufficient but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%; each parameter has a basic description (e.g., 'Number of people', 'Pasta shape', 'Appetite level'). The description adds value by linking parameters to the overall calculation but does not enrich understanding of how each parameter influences the result. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates dry pasta, water, and salt quantities for a given number of people. It specifies the verb 'calculate', the resources (dry pasta, water, salt), and the scope (for a given number of people), distinguishing it from other calculation tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or comparison to sibling tools like calculate_bread_hydration or calculate_cooking_time. An AI agent would not know when this tool is appropriate over others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
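As a reference point for what the tool likely computes, here is a minimal sketch based on the classic Italian "1-10-100" guideline (1 L water and 10 g salt per 100 g dry pasta); the per-person base of 100 g and the appetite multipliers are assumptions, not the server's documented values:

```python
# Hypothetical appetite multipliers; the server's actual enum values are unknown.
APPETITE = {"light": 0.8, "normal": 1.0, "hearty": 1.25}

def pasta_portions(num_people, appetite="normal"):
    pasta_g = 100 * APPETITE[appetite] * num_people
    water_l = pasta_g / 100   # 1 L of water per 100 g of pasta
    salt_g = 10 * water_l     # 10 g of salt per litre of water
    return {"pasta_g": pasta_g, "water_l": water_l, "salt_g": salt_g}
```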

calculate_pendulum_periodCInspect

Compute simple pendulum period T=2π√(L/g). Use for physics homework or clock design. Inputs: length m, gravity m/s² (default 9.81). Returns period in seconds. See list_bundles for related 'science' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
gravityNoGravity m/s²
length_mYesPendulum length meters

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose assumptions (e.g., the small-angle approximation), limitations, or behavior beyond the basic formula.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Brief and front-loaded, but the single line packs formula, usage, inputs, and returns together with no structure, making it harder for an agent to pick out the key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With only a generic output schema and simple parameters, the short 'Returns period in seconds' note is the sole output documentation; assumptions such as the small-angle approximation and edge cases (e.g., non-positive length) go unmentioned, reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters with descriptions. Description adds no additional meaning beyond the schema; baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states that the tool computes the period of a simple pendulum via T=2π√(L/g). It is clear but does little to differentiate the tool from the many other calculate_ siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternative calculate_ tools. Lacks context for prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
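The formula the tool's own description names, T=2π√(L/g), is straightforward to sketch; the default gravity mirrors the schema's 9.81 m/s²:

```python
import math

def pendulum_period(length_m, gravity=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / gravity)
```

A 1 m pendulum on Earth has a period of roughly 2.01 s.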

calculate_percentageBInspect

Calculate percentages: value of total, percentage change, what percent. See list_bundles for related 'math' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
aYesFirst value
bYesSecond value
operationYesof: X% of Y; change: from A to B; what_pct: X is what % of Y

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only lists operations but says nothing about side effects, return format, error handling, precision, or edge cases. For a calculation tool, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that clearly conveys the core functionality without any unnecessary words or repetition. It is well-structured and front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks any mention of return values or output format, despite no output schema. For a simple calculator, it might suffice, but additional details on result structure or error messages would improve completeness. The high schema coverage partially compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all parameters (a, b, operation) and enum values. The description adds minimal context by restating the operations, but does not explain parameter constraints or usage beyond what the schema provides, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists three distinct percentage operations (value of total, percentage change, what percent) which gives a clear purpose. However, it does not differentiate from the sibling 'calculate_percentage_change' tool, which likely covers one of these operations, causing potential ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus any of the many sibling tools (e.g., 'calculate_percentage_change', 'calculate_discount'). The description does not specify context or exclusions, leaving the agent without decision support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
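The three operations listed in the schema's enum description can be sketched as follows; the exact semantics are inferred from that enum text and are an assumption, not the server's verified implementation:

```python
def calc_percentage(a, b, operation):
    if operation == "of":        # X% of Y
        return a / 100 * b
    if operation == "change":    # % change from A to B
        return (b - a) / a * 100
    if operation == "what_pct":  # X is what % of Y
        return a / b * 100
    raise ValueError(f"unknown operation: {operation}")
```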

calculate_percentage_changeBInspect

Compute % change between two values, signed (increase or decrease). Use for performance comparisons, statistics. Inputs: old value, new value. Returns absolute and relative change. See list_bundles for related 'math' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
new_valueYesNew value
old_valueYesOriginal value

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. It does not mention the formula used (e.g., ((new - old) / old) * 100), edge cases like division by zero, or how negative values are handled. This is insufficient for a tool that could produce division-by-zero errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and front-loaded, with no unnecessary words; each clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and lack of output schema, the description is adequate for basic understanding. However, it misses important contextual details such as the formula and how to handle zero or negative inputs, which are common in percentage change calculations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions 'New value' and 'Original value', which are self-explanatory. The description adds no extra semantic meaning beyond what the schema already provides, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Compute % change between two values, signed' clearly states the verb (compute) and resource (percentage change). Specifying 'between two values, signed' differentiates it from siblings like 'calculate_percentage' or 'calculate_discount', but it does not explicitly address the overlap with calculate_percentage's 'change' operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, limitations, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
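The standard formula, including the division-by-zero edge case the Behavior review calls out, can be sketched as follows; raising on a zero old value is an assumed policy, since the server does not document its error handling:

```python
def percentage_change(old_value, new_value):
    """Signed % change plus the absolute change between two values."""
    if old_value == 0:
        raise ValueError("percentage change is undefined for old_value == 0")
    absolute = new_value - old_value
    relative = absolute / old_value * 100
    return {"absolute": absolute, "relative_pct": relative}
```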

calculate_percentile_rankCInspect

Compute the percentile rank of a value within a dataset. Use for benchmarking scores or salaries. Inputs: value, dataset (list of numbers). Returns percentile (0-100). See list_bundles for related 'math' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
valueYesValue to rank
total_valuesYesTotal number of values
values_belowYesNumber of values below

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description fails to disclose how the percentile rank is computed (e.g., formula: (values_below / total_values) * 100). The agent cannot infer the exact behavior or edge-case handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short, but its brevity comes at the cost of accuracy: it advertises a 'dataset (list of numbers)' input that the schema does not expose (the actual parameters are value, total_values, and values_below), which makes it misleading for a three-parameter calculation tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the generic output schema and missing annotations, the 'Returns percentile (0-100)' note covers the output, but the stated inputs do not match the schema parameters, leaving a significant gap an agent must bridge on its own.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All three parameters have descriptions in the schema covering 100% of properties. The tool description adds no additional semantics beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description names the computation (percentile rank of a value within a dataset), but its input summary contradicts the schema, and it does not clarify the formula or distinguish the tool from siblings like calculate_z_score or calculate_percentage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Among many statistical calculation tools, there is no mention of appropriate scenarios or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
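The formula the Behavior review infers, (values_below / total_values) × 100, can be sketched directly from the schema parameters; it is an inference, not a documented implementation:

```python
def percentile_rank(values_below, total_values):
    """Percentile rank as a 0-100 value (simple count-based formula)."""
    if total_values <= 0:
        raise ValueError("total_values must be positive")
    return values_below / total_values * 100
```

For instance, a value with 25 of 100 observations below it ranks at the 25th percentile.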

calculate_perimeterCInspect

Calculate perimeter/circumference for common shapes. Returns: {shape, perimeter}. See list_bundles for related 'math' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
sideNoSide for square/hexagon
shapeYesShape
widthNoWidth/side b
lengthNoLength/side a
radiusNoRadius
side_cNoSide c for triangle
semi_majorNoSemi-major for ellipse
semi_minorNoSemi-minor for ellipse

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must disclose behavioral traits (e.g., return format, idempotency, validation). It only states the operation without any such details, leaving the agent uninformed about safety or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is concise and front-loaded, and no additional structure is needed; it efficiently conveys the core purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool, the description is adequate but lacks output semantics (no output schema) and does not elaborate on units or error handling. Given full schema coverage, it is minimally complete but could be improved.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents each parameter clearly. The description adds no extra meaning beyond the schema, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses a specific verb ('Calculate') and resource ('perimeter/circumference') and distinguishes from sibling tools like calculate_area or calculate_volume. However, it omits the list of shapes, which is only available in the input schema enum, so clarity is slightly diminished.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternative geometry tools (e.g., calculate_area, calculate_volume). There is no mention of prerequisites, context, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
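A few of the shape formulas such a tool likely dispatches on can be sketched from the parameter list; the shape names and keyword arguments mirror the schema but are otherwise an assumption, and the ellipse case uses Ramanujan's approximation since no closed form exists:

```python
import math

def perimeter(shape, **dims):
    if shape == "square":
        return 4 * dims["side"]
    if shape == "rectangle":
        return 2 * (dims["length"] + dims["width"])
    if shape == "circle":
        return 2 * math.pi * dims["radius"]
    if shape == "ellipse":
        # Ramanujan's approximation for the ellipse perimeter
        a, b = dims["semi_major"], dims["semi_minor"]
        h = ((a - b) / (a + b)) ** 2
        return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    raise ValueError(f"unsupported shape: {shape}")
```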

calculate_pet_ageCInspect

Convert pet age (dog/cat) to human-equivalent years. Use for pet health monitoring. Inputs: animal type, age years, breed size. Returns human-equivalent age. See list_bundles for related 'animaux' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
sizeNo
animalYes
age_yearsYes

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description fails to disclose behavioral traits like the conversion formula, the role of the 'size' parameter, or any limitations. The brevity leaves significant gaps in understanding tool behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact, but that compactness comes at the expense of useful detail, such as how the breed-size adjustment works. It is not structured to convey the conversion's assumptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the three parameters and lack of output schema, the description is incomplete. It does not explain the purpose of the optional 'size' parameter or how the conversion works, leaving agents with insufficient context for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description only partially compensates: it lists the inputs (animal type, age years, breed size) but offers no insight into parameter semantics or usage, such as that size applies to dogs only.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb 'convert' and resource 'pet age to human equivalent years', clearly indicating the tool's function. However, it does not differentiate from sibling tools like calculate_cat_age and calculate_dog_age, which may have similar purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as the separate cat and dog age calculators. It lacks context for appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
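As an example of the kind of conversion the review finds undisclosed: one widely cited approximation for dogs, from a 2020 epigenetic-clock study of Labradors, maps dog years to human years as 16·ln(age) + 31. Whether this server uses that formula, a linear rule, or breed-size tables is unknown; this sketch is purely illustrative:

```python
import math

def dog_age_to_human(age_years):
    """Published approximation 16*ln(age) + 31 (assumed here for illustration)."""
    if age_years <= 0:
        raise ValueError("age must be positive")
    return 16 * math.log(age_years) + 31
```

Under this formula a 1-year-old dog is roughly 31 in human years.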

calculate_pet_bmiBInspect

Estimate body condition score proxy (BMI) for dogs and cats. Returns: {thresholds}. See list_bundles for related 'animaux' calculators.

ParametersJSON Schema
NameRequiredDescriptionDefault
animalYes
weight_kgYes
body_length_cmYes

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description must carry the burden. Only 'estimate' indicates a read-only calculation, but it does not disclose the return format, precision, or any side effects. More detail is needed for safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy, directly stating the tool's purpose. Front-loaded with the verb 'Estimate'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description names only a '{thresholds}' return, without explaining how the BMI is computed or how the thresholds should be interpreted. For a calculation tool, the agent needs to know what the result means.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain parameter meaning beyond names (e.g., units for weight and body length, or why body_length is required). The agent must rely solely on parameter names, which is insufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates a body condition score proxy (BMI) for dogs and cats, distinguishing it from human BMI tools (e.g., calculate_bmi) and other pet-related calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. Usage is implied by the name and description, but there are no exclusions or pointers to sibling tools like calculate_bmi for humans.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pet_food_portion (grade C)

Compute daily food portion (g) for dogs and cats by weight, age, activity. Use for pet feeding. Inputs: animal type, weight, activity, life stage. Returns grams/day and meal split. See list_bundles for related 'animaux' calculators.
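
The description gives the inputs and the grams/day output but not the method. A minimal sketch of one plausible approach, assuming the common veterinary RER/MER formulas and a typical dry-food energy density, none of which this server documents:

```python
def daily_portion_g(weight_kg: float, activity_factor: float = 1.6,
                    kcal_per_gram: float = 3.7) -> float:
    """Estimate grams of food per day from body weight.

    Assumptions, NOT confirmed by the tool's description:
    - RER = 70 * kg**0.75 kcal/day (common veterinary formula)
    - MER = RER * activity_factor (roughly 1.2 sedentary to 3.0 very active)
    - dry food provides about 3.7 kcal per gram
    """
    rer = 70 * weight_kg ** 0.75       # resting energy requirement, kcal/day
    mer = rer * activity_factor        # maintenance energy requirement
    return mer / kcal_per_gram         # kcal/day -> grams/day
```

A 10 kg dog with the default factors lands around 170 g/day; the server's actual figures may differ because its per-life-stage factors are undocumented.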

Parameters (JSON Schema)

Name | Required | Description | Default
activity | Yes | Activity level |
pet_type | Yes | Type of pet |
age_years | Yes | Pet age years |
weight_kg | Yes | Pet weight kg |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must disclose all behavioral traits. It only states the purpose without mentioning the calculation formula, assumptions, or any side effects. The tool's read-only nature is assumed but not confirmed.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words. It is efficiently front-loaded, though it could benefit from slightly more detail without becoming verbose.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description names the returned grams/day and meal split but does not explain how the portion is calculated or why this tool exists alongside more specific siblings. That is thin for a calculation tool whose output schema is only loosely typed.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter having a short description (e.g., 'Activity level'), so the description adds no additional meaning beyond that. Baseline 3 is appropriate as the schema already does the work.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates daily food portion for dogs and cats. It uses a specific verb+resource combination, but does not explicitly differentiate from similar siblings like calculate_dog_food and calculate_cat_food.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus the separate dog/cat food calculators or other siblings. Without context, the agent cannot determine why this unified tool exists.

calculate_pet_medication_dose (grade C)

Compute veterinary medication dose by pet weight (mg/kg). Use for medication administration. Inputs: weight kg, dose mg/kg. Returns total mg and tablet count. See list_bundles for related 'animaux' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
weight_kg | Yes | |
dose_mg_per_kg | Yes | |
concentration_mg_per_ml | No | |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits like required input ranges, unit assumptions, or what happens when concentration is omitted. The agent must infer behavior.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise but lacks detail. It is not verbose but also not sufficiently informative.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 3 parameters (2 required) and no detailed output schema, the description names the returned total mg and tablet count but does not explain the assumed tablet strength, how the optional concentration affects the result, or rounding behavior. It is incomplete for a medication dose calculator.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description adds no meaning beyond the parameter names. It does not explain the purpose of weight_kg, dose_mg_per_kg, or concentration_mg_per_ml.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Compute veterinary medication dose by pet weight (mg/kg)', specifying verb and resource. It is clear in purpose but does not differentiate from sibling tools like calculate_pet_bmi or calculate_pet_food_portion.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, such as calculate_pet_food_portion. The description provides no context about prerequisites or scenarios.

calculate_pet_vaccination_schedule (grade C)

Generate upcoming vaccination schedule for a puppy or kitten. Use for pet care planning. Inputs: pet type, birth date, last vaccine date. Returns upcoming dates and vaccines. See list_bundles for related 'animaux' calculators.
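
The schedule generation reduces to date arithmetic over a protocol table. A sketch with hypothetical milestone ages; real protocols vary by country and vaccine, and the server's table is undocumented:

```python
from datetime import date, timedelta

# Hypothetical core-vaccine milestone ages in weeks; illustrative only.
CORE_SCHEDULE_WEEKS = {"first shot": 8, "second shot": 12,
                       "third shot": 16, "booster": 52}

def vaccination_dates(birth_date: date) -> dict:
    """Map each assumed milestone to a calendar date."""
    return {name: birth_date + timedelta(weeks=weeks)
            for name, weeks in CORE_SCHEDULE_WEEKS.items()}
```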

Parameters (JSON Schema)

Name | Required | Description | Default
pet_type | Yes | Type of pet |
birth_date | Yes | Pet birth date YYYY-MM-DD |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states 'Generate upcoming vaccination schedule', which implies a read-only computation, but provides no detail on side effects, assumptions (e.g., typical vaccine intervals), or what happens for invalid dates. Critical behavioral context is missing.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, which is concise but lacks structure. It front-loads the purpose, but does not earn its place by providing additional value such as output format or example usage. More sentences could improve clarity.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has only 2 parameters and a loosely typed output schema. The description says it returns upcoming dates and vaccines, but not in what structure or under which vaccination protocol. For a schedule generator, that format information is essential for the agent to use the result correctly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers both parameters at 100%, so the schema already documents pet_type (enum) and birth_date (format). The description adds no extra meaning or context, such as how these inputs are used to compute the schedule, resulting in baseline value.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'Generate' and the resource 'upcoming vaccination schedule for a pet', which is clear and distinct from sibling tools that perform mathematical calculations. However, it lacks detail on what the schedule includes (e.g., all core vaccines or just next dose), so it is not maximally specific.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not indicate when to use this tool vs alternatives (e.g., for cat vs dog, or when a general schedule is needed). There is no mention of prerequisites or situations to avoid, leaving the agent without decision-making support.

calculate_ph (grade C)

Compute pH from H+ concentration or vice versa. Use for chemistry or aquarium care. Formula: pH=-log10[H+]. Inputs: pH or [H+] mol/L. Returns the missing value and acidity class. See list_bundles for related 'science' calculators.
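
The stated formula pH = -log10[H+] makes both directions trivial to verify. The acidity labels below are a plausible guess, not the server's documented classes:

```python
import math

def ph_from_concentration(h_mol_per_l: float) -> float:
    """pH = -log10[H+], per the tool's stated formula."""
    return -math.log10(h_mol_per_l)

def concentration_from_ph(ph: float) -> float:
    """Inverse relation: [H+] = 10**(-pH)."""
    return 10 ** -ph

def acidity_class(ph: float) -> str:
    """Assumed labels; the server's exact wording is undocumented."""
    if ph < 7:
        return "acidic"
    if ph > 7:
        return "basic"
    return "neutral"
```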

Parameters (JSON Schema)

Name | Required | Description | Default
ph_value | No | pH |
h_concentration | No | H+ mol/L |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior. It states the conversion direction and the returned values, but omits crucial details like what happens when both inputs are supplied, accepted ranges, and error handling.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at one sentence. It is front-loaded and wastes no words. However, it could be slightly clearer by including 'concentration' for H+.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the loosely typed output schema, the description must explain return values. It mentions the missing value and an acidity class but gives no constraints or error behavior. For a simple calculator it is minimally viable, but it lacks context for reliable agent invocation.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions ('pH', 'H+ mol/L'), so the schema already defines their meaning. The description adds no additional semantics beyond the schema.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates pH from H+ or vice versa, specifying the verb and resource. However, it does not explicitly mention 'conversion' or differentiate from other calculate tools, but the bidirectional nature is clear.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Among many sibling calculators, there is no context about prerequisites, input constraints, or when not to use.

calculate_pinel_tax_reduction (grade B)

Compute French Pinel rental investment tax reduction (rates 2026). Use to evaluate Pinel real estate investment savings. Inputs: investment amount, duration (6/9/12y). Returns total tax reduction and yearly amount. See list_bundles for related 'immobilier' calculators.
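
The shape of the calculation is rate times capped investment, spread over the commitment. The rates below are placeholders, NOT the server's 2026 rates (which the description does not disclose); only the 300,000 EUR cap comes from the parameter schema:

```python
# Placeholder rates per commitment duration; illustrative only, not the
# server's (undisclosed) 2026 rates.
ILLUSTRATIVE_RATES = {6: 0.09, 9: 0.12, 12: 0.14}
CAP_EUR = 300_000  # cap stated in the investment parameter description

def pinel_reduction(investment: float, duration: int) -> dict:
    """Total reduction = capped investment * rate, spread evenly per year."""
    total = min(investment, CAP_EUR) * ILLUSTRATIVE_RATES[duration]
    return {"total_reduction": total, "yearly": total / duration}
```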

Parameters (JSON Schema)

Name | Required | Description | Default
duration | Yes | Rental commitment duration in years: 6, 9 or 12 |
investment | Yes | Investment amount in EUR (max 300,000) |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It states a calculation with 2026 rates and the returned amounts, but does not confirm read-only behavior, side effects, or the formula's assumptions. For a tax calculation tool, this is insufficient transparency.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads the essential information in a few short sentences. It is concise, though it could include a bit more context without becoming verbose.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has low complexity with 2 simple parameters and no output schema. The description is minimal but sufficient for a straightforward calculation. However, it lacks details about the formula, assumptions, or whether the result is in euros, leaving some gaps for a tax tool.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters having descriptions in the schema. The description does not add any additional meaning beyond what the schema already provides, so baseline 3 is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as calculating a specific French tax reduction (Pinel) for a specific year (2026), with a clear verb 'Compute' and resource 'French Pinel rental investment tax reduction'. It is easily distinguishable from the many sibling calculation tools.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, no prerequisites mentioned (e.g., eligibility for Pinel), and no context about the tax regime. The description lacks any when-to-use or when-not-to-use information.

calculate_pipe_diameter (grade B)

Calculate the minimum pipe diameter required for a given flow rate and maximum velocity. See list_bundles for related 'plomberie' calculators.
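
The underlying physics is the continuity equation Q = v·A. A minimal sketch, assuming the result is wanted in millimetres (the description never states the output unit):

```python
import math

def min_pipe_diameter_mm(flow_rate_lpm: float,
                         max_velocity_ms: float = 1.5) -> float:
    """d = sqrt(4*Q / (pi * v)), from Q = v * A for a circular pipe."""
    q_m3s = flow_rate_lpm / 1000 / 60                       # L/min -> m^3/s
    d_m = math.sqrt(4 * q_m3s / (math.pi * max_velocity_ms))
    return d_m * 1000                                       # m -> mm
```

30 L/min at the 1.5 m/s DTU default works out to roughly a 21 mm internal diameter.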

Parameters (JSON Schema)

Name | Required | Description | Default
flow_rate_lpm | Yes | Required flow rate in liters per minute |
max_velocity_ms | No | Maximum water velocity in m/s (default 1.5 m/s per DTU norms) |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as read-only, idempotency, authentication requirements, or side effects. For a calculation tool, it is likely safe, but the description should explicitly convey this.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the purpose and essential parameters. It is efficient and free of extraneous information.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple nature of the tool with only two parameters, the description is mostly complete. However, it lacks information about the output unit (e.g., mm or m) and does not mention any return format, which reduces completeness.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% coverage with descriptions for both parameters. The tool description echoes the parameter names but adds no additional meaning or constraints beyond what the schema already states. Baseline score of 3 is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action (calculate minimum pipe diameter) and specifies the required inputs (flow rate and maximum velocity). It is a specific verb+resource combination that distinguishes it from the many other calculate tools.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor are there any prerequisites or constraints mentioned. The description merely states the function without context for selection.

calculate_pipe_flow_rate (grade B)

Calculate water flow rate through a pipe using the Hazen-Williams formula. Returns: {C_coefficient}. See list_bundles for related 'plomberie' calculators.
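
The Hazen-Williams relation named in the description can be sketched as below. The C values, and the assumption that the whole available pressure is lost as friction over the pipe length, are illustrative; the server's coefficient table and pressure model are undocumented:

```python
import math

# Typical Hazen-Williams roughness coefficients; illustrative values only.
C_COEFFICIENTS = {"pvc": 150, "copper": 140, "steel": 120, "cast_iron": 100}

def pipe_flow_lpm(diameter_mm: float, length_m: float, material: str,
                  pressure_bar: float = 3.0) -> float:
    """Hazen-Williams in SI form: v = 0.849 * C * R**0.63 * S**0.54.

    Assumes all available pressure is dissipated over the length
    (1 bar is about 10.2 m of water head), which may not match the server.
    """
    d = diameter_mm / 1000                  # mm -> m
    r = d / 4                               # hydraulic radius of a full pipe
    s = pressure_bar * 10.2 / length_m      # head-loss slope, m per m
    v = 0.849 * C_COEFFICIENTS[material] * r ** 0.63 * s ** 0.54
    return v * math.pi * d ** 2 / 4 * 60_000   # m^3/s -> L/min
```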

Parameters (JSON Schema)

Name | Required | Description | Default
length_m | Yes | Pipe length in meters |
material | Yes | Pipe material (affects Hazen-Williams C coefficient) |
diameter_mm | Yes | Pipe internal diameter in millimeters |
pressure_bar | No | Available water pressure in bar (default 3 bar) |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose key behavioral traits such as return value units (e.g., L/min or m³/s), formula assumptions, or restrictions (e.g., water only). The mention of Hazen-Williams formula implies some context but falls short of full transparency.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that efficiently conveys the tool's purpose without extraneous words. It is well-suited for quick scanning.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, no annotations, and four parameters, the description is insufficiently complete. It does not explain the output format, formula limitations, or how this tool fits with related tools (e.g., calculate_pipe_diameter). The default pressure value is mentioned in the schema but not reinforced in the description.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear parameter descriptions (e.g., 'Pipe internal diameter in millimeters', 'Available water pressure in bar (default 3 bar)'). The description repeats the formula name but adds no additional meaning beyond the schema, so baseline score applies.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the verb 'Calculate', the resource 'water flow rate through a pipe', and the method 'Hazen-Williams formula'. It effectively distinguishes itself from many sibling tools that start with 'calculate_' by providing specific technical context.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'calculate_pipe_diameter' or 'calculate_water_pressure_loss'. There is no mention of prerequisites, limitations, or suitable scenarios.

calculate_planet_weight (grade B)

Compute your weight on other planets using gravity ratios. Use for fun, education, sci-fi. Inputs: weight on Earth (kg). Returns weight on each planet of the solar system. See list_bundles for related 'astronomie-nature' calculators.
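
The calculation is a single multiplication by the planet's surface-gravity ratio. The ratios below are approximate published values, not necessarily the ones the server uses:

```python
# Surface gravity relative to Earth (approximate values).
GRAVITY_RATIO = {"mercury": 0.38, "venus": 0.91, "earth": 1.0,
                 "mars": 0.38, "jupiter": 2.53, "saturn": 1.06,
                 "uranus": 0.90, "neptune": 1.14}

def planet_weight_kg(earth_weight_kg: float, planet: str) -> float:
    """Weight scales linearly with surface gravity."""
    return earth_weight_kg * GRAVITY_RATIO[planet]
```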

Parameters (JSON Schema)

Name | Required | Description | Default
planet | Yes | Target planet |
earth_weight_kg | Yes | Weight on Earth in kg |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden for behavioral traits. It names the gravity-ratio approach and the returned weights, but says nothing about error handling, precision, or which gravity values are used, so behavioral context is still thin.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, short sentence that efficiently communicates the purpose. It could be slightly expanded without harming conciseness, but as is, it wastes no words.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, loosely typed output), the description is adequate but not complete. It does not spell out constraints such as the accepted planet values, but the schema provides the necessary parameter details.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no additional parameter-level meaning beyond what the schema already provides (earth_weight_kg described as 'Weight on Earth in kg', planet as 'Target planet' with enum values).

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Compute') and the resource ('your weight on other planets'), which is specific and distinguishes it from sibling tools. No other tool calculates planetary weight, so the purpose is unambiguous.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternatives. However, the tool's purpose is obvious, so it is minimally adequate.

calculate_plaster (grade C)

Calculate plaster volume and weight for a given surface and thickness. Returns: {area_m2, volume_m3, weight_kg, bags_25kg}. See list_bundles for related 'construction' calculators.
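
The listed return fields imply a volume-times-density calculation. A sketch assuming a bulk density of 1000 kg/m³ (hypothetical; the server's figure is undocumented):

```python
import math

def plaster_quantities(area_m2: float, thickness_mm: float = 13.0,
                       density_kg_m3: float = 1000.0) -> dict:
    """volume = area * thickness; weight = volume * density.

    density_kg_m3 is an assumed value. Keys mirror the description's
    stated return {area_m2, volume_m3, weight_kg, bags_25kg}.
    """
    volume_m3 = area_m2 * thickness_mm / 1000
    weight_kg = volume_m3 * density_kg_m3
    return {"area_m2": area_m2, "volume_m3": volume_m3,
            "weight_kg": weight_kg,
            "bags_25kg": math.ceil(weight_kg / 25)}
```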

Parameters (JSON Schema)

Name | Required | Description | Default
area_m2 | Yes | Surface area in m² |
thickness_mm | No | Thickness in mm (default 13) |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility for behavioral disclosure. It states that the tool calculates volume and weight and lists the returned fields, but it does not say whether a standard plaster density is assumed or whether there are any side effects. This lack of detail leaves the agent uncertain about the tool's behavior.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence. It contains no unnecessary words and efficiently communicates the tool's purpose. It is well-suited for quick parsing by an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description should compensate by specifying what the tool returns (e.g., volume in m³ and weight in kg with standard plaster density). It does not provide output units or format, which is necessary for the agent to use the results effectively. The description is incomplete for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters are described in the schema (area_m2 with 'Surface area in m²', thickness_mm with 'Thickness in mm (default 13)'), giving 100% schema coverage. The description adds 'volume and weight' which aligns with the parameters but does not provide additional semantic meaning beyond the schema. A baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates plaster volume and weight from surface and thickness. The verb 'calculate' and resource 'plaster' are specific, distinguishing it from many sibling tools that calculate other quantities. However, it does not explicitly differentiate from other construction-related calculations like concrete mix.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives is provided. The description lacks context on prerequisites, typical use cases, or situations where another tool would be more appropriate. For example, it does not mention that this tool is for plastering walls or ceilings, nor does it compare to similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_poker_hand_probability (B)

Compute probability of common poker hands (straight, flush, full house, etc.) given a starting hand. Use for poker strategy. Inputs: hole cards, community cards. Returns probability per hand category. See list_bundles for related 'jeux-probabilites' calculators.

Parameters

Name | Required | Description | Default
hand_type | Yes | Poker hand type to calculate probability for | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
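The schema exposes only hand_type, so a plausible implementation is a lookup of the classic 5-card draw frequencies out of C(52,5) = 2,598,960 hands; this sketch follows the schema rather than the hole/community-card inputs the description mentions, and both the counts table and the category names are assumptions about the server's method.

```python
from fractions import Fraction

# Classic 5-card poker hand frequencies; category key names are hypothetical.
FIVE_CARD_COUNTS = {
    "royal_flush": 4,
    "straight_flush": 36,      # excluding royal flushes
    "four_of_a_kind": 624,
    "full_house": 3744,
    "flush": 5108,             # excluding straight/royal flushes
    "straight": 10200,         # excluding straight/royal flushes
    "three_of_a_kind": 54912,
    "two_pair": 123552,
    "pair": 1098240,
    "high_card": 1302540,
}
TOTAL_HANDS = 2598960  # C(52, 5)

def hand_probability(hand_type: str) -> Fraction:
    """Exact probability of drawing the given hand in 5 random cards."""
    return Fraction(FIVE_CARD_COUNTS[hand_type], TOTAL_HANDS)
```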
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds minimal behavioral context beyond the name and schema. It states 'exact' but does not explain the return structure or any assumptions. Without annotations, the description carries the full burden, but the tool is simple enough that this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that clearly communicates the tool's purpose with no extraneous information. It is front-loaded with the essential verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's function and input, but given the lack of output schema, it would benefit from mentioning the output format (e.g., probability as a fraction or decimal). The tool is straightforward, so the description is mostly adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage for the single parameter, so the baseline is 3. The description does not add additional meaning beyond what the schema provides, such as examples or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates exact probability and odds for any 5-card poker hand, which distinguishes it from other probability calculators on the server. However, it does not specify the output format (e.g., decimal, fraction), slightly limiting clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like calculate_dice_probability or calculate_lottery_odds. The description does not mention prerequisites, limitations, or when this tool is specifically appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pool_chlorine (C)

Compute chlorine dosage (g) for pool maintenance based on volume and target ppm. Use for pool care. Inputs: pool volume m³, target chlorine ppm, current ppm. Returns chlorine grams. See list_bundles for related 'construction' calculators.

Parameters

Name | Required | Description | Default
target_ppm | No | Target chlorine ppm | -
current_ppm | No | Current chlorine ppm | -
volume_liters | Yes | Pool volume liters | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
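The underlying arithmetic is likely the standard 1 ppm = 1 mg/L identity, sketched below; whether the server assumes pure chlorine or a specific product concentration is unknown, so treat the defaults as illustrative.

```python
def chlorine_grams(volume_liters: float, target_ppm: float = 2.0,
                   current_ppm: float = 0.0) -> float:
    """Grams of chlorine to raise a pool from current_ppm to target_ppm.

    Assumes 1 ppm = 1 mg of chlorine per litre of water.
    """
    delta_mg_per_l = max(target_ppm - current_ppm, 0.0)
    return volume_liters * delta_mg_per_l / 1000.0  # mg -> g
```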
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behaviors. It fails to mention return format, calculation approach, or any limitations. The agent cannot assess safety or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise (four words) but lacks sentence structure. It is under-specified rather than efficiently informative. While not verbose, it sacrifices clarity for brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema and minimal description. The tool performs a calculation but does not explain what the output represents (e.g., dosage amount or ppm result). The description is insufficient for an agent to understand the full functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so each parameter has a basic description. The tool description adds no additional meaning beyond what the schema provides, meriting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Pool chlorine dosage' hints at the purpose but lacks a verb or specific resource. It is minimally clear but does not differentiate from siblings like 'calculate_pool_volume' or 'calculate_aquarium_volume'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. There are no exclusions, prerequisites, or usage tips, leaving the agent to infer context from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pool_volume (D)

Compute swimming pool water volume in m³ and liters. Use for pool maintenance dosing. Inputs: shape, dimensions. Returns volume m³ and L. See list_bundles for related 'vie-quotidienne' calculators.

Parameters

Name | Required | Description | Default
shape | Yes | Shape | -
depth_m | Yes | Avg depth m | -
width_m | No | Width m | -
length_m | No | Length m | -
diameter_m | No | Diameter (round) | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
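A minimal sketch of the geometry such a tool would apply follows; the shape literals "rectangular" and "round" are guesses (the schema does not enumerate them), and parameter names mirror the schema above.

```python
import math

def pool_volume(shape: str, depth_m: float, length_m: float = 0.0,
                width_m: float = 0.0, diameter_m: float = 0.0) -> dict:
    """Water volume of a pool from its shape and average depth."""
    if shape == "rectangular":
        volume_m3 = length_m * width_m * depth_m
    elif shape == "round":
        volume_m3 = math.pi * (diameter_m / 2) ** 2 * depth_m
    else:
        raise ValueError(f"unsupported shape: {shape}")
    return {"volume_m3": round(volume_m3, 2), "volume_l": round(volume_m3 * 1000)}
```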
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full responsibility for disclosing behavior. It fails to mention what the tool calculates (e.g., volume in cubic meters, liters), any assumptions (e.g., average depth), or formulas used. Completely lacking behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, short sentence, but it is under-specified. Conciseness is not valuable when it leaves critical information missing. Better to expand with structured details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and annotations are absent. The description should explain return values (e.g., unit of volume), but it does not. The tool's complexity (5 parameters, 2 required) demands more completeness than provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add any meaning beyond the schema's parameter descriptions (e.g., 'Shape', 'Avg depth m'). No extra context is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description 'Swimming pool volume calculation' is essentially a tautology of the tool name, adding no new information. It states the verb and resource but is vague and does not differentiate from sibling tools like 'calculate_pool_chlorine' or 'calculate_volume'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternative volume calculators or pool-specific tools. No context about prerequisites, scenarios, or exclusions is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_portage_salarial (B)

Estimate net income from portage salarial (freelance via umbrella company). Returns: {portage_management_fee_10pct, social_charges_45pct, net_monthly, net_annual_estimate, net_ratio_pct}. See list_bundles for related 'finance-france' calculators.

Parameters

Name | Required | Description | Default
daily_rate | Yes | Daily billing rate (TJM) in euros | -
days_per_month | No | Billable days per month (default 20) | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
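The return field names suggest a flat 10% management fee and 45% social charges; real French portage rates vary by umbrella company, so the sketch below is an illustration of that field structure, not a verified payroll computation.

```python
def portage_net(daily_rate: float, days_per_month: int = 20) -> dict:
    """Estimate monthly net income from portage salarial billing.

    Assumes a 10% management fee on gross, then 45% social charges
    on the remainder, matching the return field names.
    """
    gross = daily_rate * days_per_month
    fee = gross * 0.10
    charges = (gross - fee) * 0.45
    net_monthly = gross - fee - charges
    return {
        "portage_management_fee_10pct": round(fee, 2),
        "social_charges_45pct": round(charges, 2),
        "net_monthly": round(net_monthly, 2),
        "net_annual_estimate": round(net_monthly * 12, 2),
        "net_ratio_pct": round(net_monthly / gross * 100, 1),
    }
```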
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description only says 'estimate net income', implying a read-only calculation, but lacks disclosure of assumptions (e.g., French tax system, default rates), country specificity, or accuracy. Does not add value beyond stating purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of 10 words, efficient and front-loaded. No unnecessary information, but could be more informative without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should explain what the tool returns (e.g., net income per month, annual). Lacks context about portage salarial (French system) and how the estimate is computed. Incomplete for an agent to reliably use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both parameters have descriptions). The tool description does not add any additional meaning beyond what the schema provides. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'estimate' and resource 'net income from portage salarial', which is specific and distinguishes it from other calculate_* tools. No ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No mention of prerequisites, limitations, or comparisons with other financial calculation tools in the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_portfolio_allocation (A)

Calculate portfolio allocation amounts by percentage for major crypto asset classes. Returns: {allocation_pct, sum_pct}. See list_bundles for related 'crypto' calculators.

Parameters

Name | Required | Description | Default
btc_pct | No | Bitcoin allocation percentage (default 40%) | -
eth_pct | No | Ethereum allocation percentage (default 30%) | -
alts_pct | No | Altcoins allocation percentage (default 20%) | -
total_value | Yes | Total portfolio value in fiat currency | -
stablecoins_pct | No | Stablecoins allocation percentage (default 10%) | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
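The computation is straightforward percentage splitting, sketched below with the schema's default percentages; whether the server validates that percentages sum to 100 is unknown, so the sketch simply reports the sum.

```python
def portfolio_allocation(total_value: float, btc_pct: float = 40.0,
                         eth_pct: float = 30.0, alts_pct: float = 20.0,
                         stablecoins_pct: float = 10.0) -> dict:
    """Split a portfolio value across crypto asset classes by percentage."""
    pcts = {"btc": btc_pct, "eth": eth_pct,
            "alts": alts_pct, "stablecoins": stablecoins_pct}
    amounts = {k: round(total_value * p / 100.0, 2) for k, p in pcts.items()}
    return {"allocation": amounts, "sum_pct": sum(pcts.values())}
```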
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It only states the basic purpose without disclosing whether the tool is read-only, how it handles invalid percentages, or what the return format looks like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose. Every word contributes to clarity with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 5 parameters and no output schema, the description is minimally complete: it states the purpose but omits the return value structure (e.g., allocation amounts per asset) and any assumptions such as sum-to-100% validation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters have descriptions in the schema (100% coverage), and the tool description adds little beyond 'by percentage'. The baseline of 3 is appropriate as the schema carries the semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate', the resource 'portfolio allocation amounts', and the scope 'by percentage for major crypto asset classes'. It distinguishes itself from a wide array of sibling calculators by specifying crypto asset classes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests use for crypto allocation percentage calculations, but provides no explicit guidance on when to use this tool versus alternatives like 'calculate_impermanent_loss' or other portfolio tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_power_unit_convert (A)

Convert power values between W, kW, HP, BTU/h, cal/s. Returns: {original}. See list_bundles for related 'conversions' calculators.

Parameters

Name | Required | Description | Default
value | Yes | Power value to convert | -
to_unit | Yes | Target unit | -
from_unit | Yes | Source unit | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
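Such converters typically route through a base unit, sketched here via watts; the factors use the common conventions (1 mechanical HP = 745.7 W, 1 BTU/h ≈ 0.29307 W, 1 thermochemical cal = 4.184 J), but the server's exact constants are not documented.

```python
# Conversion factors to watts; HP is assumed mechanical, cal thermochemical.
TO_WATTS = {"W": 1.0, "kW": 1000.0, "HP": 745.7,
            "BTU/h": 0.29307107, "cal/s": 4.184}

def convert_power(value: float, from_unit: str, to_unit: str) -> float:
    """Convert a power value by normalizing through watts."""
    return value * TO_WATTS[from_unit] / TO_WATTS[to_unit]
```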
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description only lists the supported units and the action; it lacks details on precision, handling of negative values, error behavior, and edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 10 words, no redundant information. Front-loaded with purpose and units.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple unit conversion tool with full schema coverage, the description provides sufficient context. No output schema needed as return value is obvious.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers all 3 parameters with 100% description coverage. Description mentions units, but this is already in the enum values, adding no new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Convert' and identifies resource 'power values' followed by supported units. It clearly differentiates from sibling conversion tools for other domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives. Usage is implied by the name and description, but no exclusions or context are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pregnancy_due_date (B)

Calculate due date and current gestational week from last period. Returns: {due_date}. See list_bundles for related 'sante' calculators.

Parameters

Name | Required | Description | Default
last_period_date | Yes | Last menstrual period date YYYY-MM-DD | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
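The standard method such a tool would use is Naegele's rule (due date = last menstrual period + 280 days), though the description does not confirm it or any cycle-length adjustment; the explicit `today` argument below is an addition for determinism, since the real tool presumably uses the current date.

```python
from datetime import date, timedelta

def pregnancy_dates(last_period: str, today: str) -> dict:
    """Due date via Naegele's rule plus gestational age counted from the LMP."""
    lmp = date.fromisoformat(last_period)
    now = date.fromisoformat(today)
    weeks, days = divmod((now - lmp).days, 7)
    return {
        "due_date": (lmp + timedelta(days=280)).isoformat(),
        "gestational_age": f"{weeks}w{days}d",
    }
```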
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but only states basic functionality. It does not disclose underlying assumptions (e.g., 280-day rule, cycle length), whether gestational week is based on current date, or if the due date is estimated or exact. This lacks sufficient detail for reliable use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single succinct sentence of 10 words, front-loading the action and outputs. Every word adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with one parameter, but the description omits return format and whether gestational week is dynamic. Given no output schema, the description should specify at least the output structure. It is minimally complete but lacks clarity on results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with a clear description of the date format. The tool description merely restates 'from last period' without adding new semantic value. Baseline 3 is appropriate as schema does the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates due date and gestational week from the last period. It specifies the resource and outputs, and it is distinct from siblings like calculate_breeding_due_date or calculate_due_date by explicitly mentioning 'pregnancy' and 'gestational week'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as calculate_due_date or animal pregnancy calculators. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_present_value (C)

Compute the present value (PV) of a future sum given a discount rate. Use in DCF, NPV, or retirement planning. Inputs: future value, annual rate %, years. Returns PV and discount factor. See list_bundles for related 'finance-universal' calculators.

Parameters

Name | Required | Description | Default
rate | Yes | Annual discount rate percent | -
years | Yes | Number of years | -
future_value | Yes | Future value EUR | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
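The standard discounting formula is PV = FV / (1 + r)^n; the sketch below assumes annual compounding, which the description does not state, so the compounding convention is an assumption.

```python
def present_value(future_value: float, rate: float, years: int) -> dict:
    """Present value of a future sum under annual compounding.

    rate is the annual discount rate in percent, matching the schema.
    """
    factor = 1.0 / (1.0 + rate / 100.0) ** years
    return {
        "present_value": round(future_value * factor, 2),
        "discount_factor": round(factor, 4),
    }
```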
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only states the purpose without mentioning assumptions (e.g., compounding frequency), error handling, or return value format. This is insufficient for a financial tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no extraneous words. It is concise and appropriately sized for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool and no output schema, the description should mention key details like the result unit (EUR) and assumed compounding (likely annual). Without these, the description is incomplete for accurate usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the parameters are already well-documented (rate as annual discount rate percent, years, future value in EUR). The description adds no additional semantic value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool calculates the present value of a future sum, using a specific verb and resource. It is unambiguous but lacks any additional distinguishing features from sibling tools like calculate_future_value or calculate_compound_interest.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternative financial calculation tools. There is no mention of prerequisites, context, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pressure_convert (A)

Convert pressure between Pa, kPa, MPa, bar, psi, atm, mmHg, mbar, torr. Use for engineering, weather, medicine. Inputs: value, from-unit, to-unit. Returns: {original}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)
value (required): Pressure value
to_unit (required): Target unit
from_unit (required): Source unit

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It states that the tool 'converts' but does not mention that this is a pure, read-only mathematical conversion with no side effects, authentication needs, or rate limits. The behavior is implied, but explicit safety assurances are missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no redundant words. It efficiently conveys the tool's core purpose without extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description is adequate for a simple conversion tool, it fails to differentiate the tool from the sibling 'convert_pressure', which may serve an identical purpose. No explanation of return values (no output schema) is provided, but given the low complexity this is a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters, including descriptions and enums for units. The description adds no additional meaning beyond restating the supported units. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts pressure between specified units (Pa, kPa, MPa, bar, psi, atm, mmHg, mbar, torr), using a specific verb ('Convert') and resource ('pressure'). It effectively distinguishes itself from siblings by listing the exact units supported.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for pressure conversion but provides no explicit guidance on when to use this tool versus alternatives (e.g., the sibling 'convert_pressure'). No when-not-to-use or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
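The pressure conversion evaluated above is a pure function of published unit definitions. A sketch that normalises every value through pascals (the factors are standard definitional values; the function name is illustrative, not the server's implementation):

```python
# Pascals per unit; psi, atm, mmHg, and torr factors are standard definitions
PA_PER_UNIT = {
    "pa": 1.0, "kpa": 1e3, "mpa": 1e6, "bar": 1e5, "mbar": 100.0,
    "psi": 6894.757293168, "atm": 101325.0, "mmhg": 133.322387415,
    "torr": 101325.0 / 760.0,
}

def convert_pressure(value: float, from_unit: str, to_unit: str) -> float:
    """Convert by going source unit -> pascals -> target unit."""
    pascals = value * PA_PER_UNIT[from_unit.lower()]
    return pascals / PA_PER_UNIT[to_unit.lower()]

print(round(convert_pressure(1, "atm", "psi"), 4))  # 14.6959
```

Normalising through a single base unit keeps the table linear in the number of units, rather than quadratic in unit pairs.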

calculate_prime_activite (B)

Estimate French prime d'activité monthly amount (CAF benefit). Use for low-income workers checking eligibility. Inputs: net monthly salary, household composition. Returns estimated benefit and eligibility note. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
salary (required): Net monthly salary in euros
household_size (optional): Number of people in household (1-6)

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the burden of behavioral disclosure. It says 'estimate' which implies approximation, but does not disclose that the calculation may rely on simplified rules, whether it covers all eligibility criteria, or any assumptions. For a benefit calculation tool, more transparency is needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no waste. It is front-loaded with the key verb and resource. While more structure could be helpful (e.g., listing output), it remains concise and efficient for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no nested objects, no output schema), the description provides a basic but adequate overview. However, it lacks information about the return format, coverage of eligibility, or limitations, which would be helpful for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters (salary and household_size) with descriptions. The description adds no extra meaning beyond what the schema provides. Per guidelines, baseline is 3 for high coverage, and the description does not improve or harm that score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it estimates eligibility and amount for a specific French benefit (prime d'activité). The verb 'estimate' and the resource 'French prime d'activité' make the purpose unambiguous and distinct from other calculate tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives, no prerequisites, and no context about when the estimate is appropriate. It only states what it does.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_print_resolution (A)

Calculate print DPI quality and maximum print size from image pixel dimensions. Returns: {effective_dpi}. See list_bundles for related 'photographie' calculators.

Parameters (JSON Schema)
image_width_px (required): Image width in pixels
print_width_cm (required): Desired print width in centimeters
image_height_px (required): Image height in pixels
print_height_cm (required): Desired print height in centimeters

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must bear the full burden. It discloses the tool's purpose but doesn't detail behavioral traits like output units (e.g., DPI in dots per inch, print size in cm) or whether it calculates actual DPI or maximum print size. While it's a simple calculation, more transparency would help.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the core functionality but omits details like output format, unit conversions (cm vs. DPI), or assumptions about DPI standards. Without an output schema, this lack of completeness could hinder effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no additional semantic value beyond the parameter descriptions in the schema, which are basic (e.g., 'Image width in pixels').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('calculate') and clearly identifies the resource ('print DPI quality and maximum print size from image pixel dimensions'). It effectively distinguishes the tool among numerous sibling 'calculate_*' tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states what the tool does but provides no guidance on when to use it versus alternatives or any prerequisites. It lacks explicit context for appropriate usage, which is a gap given the large number of sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
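The effective-DPI calculation the card above evaluates is simple arithmetic: pixels divided by print size in inches. A sketch, using the exact 2.54 cm/inch definition (the function name and the return-field names are illustrative; the conventional ~300 DPI photo-quality threshold is an assumption about how the tool judges quality):

```python
CM_PER_INCH = 2.54  # exact by definition

def print_resolution(image_width_px: int, image_height_px: int,
                     print_width_cm: float, print_height_cm: float) -> dict:
    """Effective DPI along each axis for a given print size."""
    dpi_w = image_width_px / (print_width_cm / CM_PER_INCH)
    dpi_h = image_height_px / (print_height_cm / CM_PER_INCH)
    return {"dpi_width": round(dpi_w, 1), "dpi_height": round(dpi_h, 1)}

# A 6000x4000 px photo printed at 30x20 cm
print(print_resolution(6000, 4000, 30, 20))  # {'dpi_width': 508.0, 'dpi_height': 508.0}
```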

calculate_probability_binomial (B)

Calculate binomial probability P(X=k) and cumulative P(X<=k). Returns: {exact_probability, cumulative_probability, std_deviation}. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
k (required): Number of successes
n (required): Number of trials
p (required): Probability of success per trial

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It states it calculates probabilities but does not mention return format, precision, or how it handles edge cases (e.g., k > n).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the core functionality. It is front-loaded but could benefit from slight expansion.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain what is returned. It does not specify if both point and cumulative are returned, or the format. Lacks details on precision and error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds no extra meaning beyond the parameter names and constraints in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates binomial probability P(X=k) and cumulative P(X<=k). It distinguishes from sibling tools like calculate_dice_probability by specifying the binomial distribution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. Does not mention scenarios where binomial probability is appropriate or conditions like independent trials.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
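The two quantities in the description above, P(X=k) and P(X<=k), follow directly from the binomial PMF. A sketch, assuming independent trials with constant p (the very condition the reviewer notes is left undocumented; the function name is illustrative):

```python
from math import comb

def binomial(n: int, k: int, p: float) -> tuple[float, float]:
    """Return (P(X = k), P(X <= k)) for X ~ Binomial(n, p)."""
    def pmf(i: int) -> float:
        return comb(n, i) * p**i * (1 - p) ** (n - i)

    exact = pmf(k)
    cumulative = sum(pmf(i) for i in range(k + 1))
    return exact, cumulative

# Probability of exactly 3 heads, and of at most 3 heads, in 10 fair coin flips
exact, cum = binomial(10, 3, 0.5)
print(round(exact, 4), round(cum, 4))  # 0.1172 0.1719
```

Note the edge case the reviewer raises: for k > n the PMF is simply 0, but whether the server validates or silently returns 0 is exactly the kind of detail the description should state.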

calculate_profit_margin (C)

Calculate gross margin, net margin, and markup percentage. Returns: {revenue, cost}. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
cost (required): Total cost
revenue (required): Total revenue/selling price

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior but only states the calculation purpose. It does not mention that this is a read-only, non-destructive operation, nor any side effects, output complexity, or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, using only eight words. It is clear and to the point, though it could benefit from slight elaboration on output structure without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should explain what the tool returns (e.g., an object with three fields). It fails to do so, leaving the agent uncertain about the response format, which is a critical gap for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing adequate parameter descriptions. The description adds no additional meaning beyond the schema, so it meets baseline expectations but does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates gross margin, net margin, and markup percentage, specifying the resource and expected outputs. However, it does not differentiate from sibling tool 'calculate_markup_margin', leaving potential ambiguity for the agent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_markup_margin' or other margin calculators. The description lacks context about prerequisites or use cases, leaving the agent without direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
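The ambiguity flagged above is visible in the arithmetic: with only revenue and cost as inputs, gross margin and markup are derivable, but a true net margin would require operating expenses the schema does not accept. A sketch of the two derivable figures (function and field names are illustrative, not the server's actual output):

```python
def profit_margin(revenue: float, cost: float) -> dict:
    """Gross margin is profit over revenue; markup is profit over cost.

    A net margin cannot be computed from these two inputs alone.
    """
    profit = revenue - cost
    return {
        "gross_margin_pct": round(100 * profit / revenue, 2),
        "markup_pct": round(100 * profit / cost, 2),
    }

print(profit_margin(revenue=150, cost=100))
# {'gross_margin_pct': 33.33, 'markup_pct': 50.0}
```

The same 50 EUR profit yields a 33.33% margin but a 50% markup, which is why tools in this family should say explicitly which figure they return.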

calculate_projectile_motion (C)

Compute projectile trajectory: range, max height, time of flight. Use for physics or ballistics. Inputs: initial velocity, launch angle, height. Returns range, peak, flight time. See list_bundles for related 'science' calculators.

Parameters (JSON Schema)
height (optional): Initial height (m)
velocity (required): Initial velocity (m/s)
angle_deg (required): Launch angle (degrees)

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must fully disclose behavior. It only states the general purpose without revealing details like what outputs are computed (e.g., range, time of flight, max height), assumptions (e.g., no air resistance), or unit consistency beyond what the schema inherently provides. This lack of behavioral detail makes it difficult for an agent to anticipate the tool's full effect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that immediately conveys the topic. While it is front-loaded and free of fluff, it is so brief that it sacrifices necessary detail. A slightly longer description (e.g., adding output hints) would improve completeness without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description should inform the agent about what the tool returns (e.g., range, maximum height, time of flight). It fails to do so, leaving a significant gap. For a well-known physics problem, the description is too minimal to fully prepare the agent for using the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear labels ('Initial height m', 'Initial velocity m/s', 'Launch angle degrees'). The description adds no additional meaning or context beyond these schema descriptions, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Projectile trajectory calculations' clearly states the domain and resource type. It distinguishes this tool from sibling physics calculators like calculate_kinetic_energy or calculate_distance_2d by specifying projectile motion. However, it could be more specific about the exact calculations (e.g., range, max height) to achieve a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as calculate_kinetic_energy or calculate_distance_2d. There is no mention of prerequisites, limitations, or scenarios where another tool would be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
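The three returns the description promises (range, peak, flight time) follow from the standard no-drag kinematics. A sketch, assuming g = 9.81 m/s² and no air resistance — exactly the assumptions the reviewer notes the tool leaves implicit (function and field names are illustrative):

```python
import math

G = 9.81  # m/s^2; air resistance ignored

def projectile(velocity: float, angle_deg: float, height: float = 0.0) -> dict:
    theta = math.radians(angle_deg)
    vx, vy = velocity * math.cos(theta), velocity * math.sin(theta)
    # Flight time: positive root of h + vy*t - g*t^2/2 = 0
    t = (vy + math.sqrt(vy**2 + 2 * G * height)) / G
    return {
        "range_m": round(vx * t, 2),
        "max_height_m": round(height + vy**2 / (2 * G), 2),
        "flight_time_s": round(t, 2),
    }

# Launch at 20 m/s, 45 degrees, from ground level
print(projectile(velocity=20, angle_deg=45))
# {'range_m': 40.77, 'max_height_m': 10.19, 'flight_time_s': 2.88}
```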

calculate_property_capital_gains_fr (A)

Calculate French property capital gains tax after holding-period abatements. Returns: {raw_gain_eur, taxable_ir_eur, taxable_ps_eur, tax_ir_eur, tax_ps_eur, total_tax_eur, ...}. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
sale_price (required): Sale price (EUR)
holding_years (required): Years held
purchase_price (required): Purchase price (EUR)

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes a calculation (non-destructive) but does not explicitly state read-only behavior, authorization needs, or other traits. The verb 'Calculate' implies no side effects, but lacks explicit transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the tool's purpose with no unnecessary words. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description provides sufficient context for a simple calculator tool with three numeric parameters. However, it does not specify the return format (e.g., numeric value) or any assumptions, which would be helpful given the lack of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides descriptions for all three parameters (purchase_price, sale_price, holding_years) with 100% coverage. The description adds domain-specific context by linking parameters to French property capital gains tax and holding-period abatements, enhancing meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates French property capital gains tax after holding-period abatements, with a specific verb and resource. It distinguishes from sibling tools like 'calculate_capital_gains_property' by specifying French property and abatements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for French property capital gains tax but does not explicitly state when to use this tool versus alternatives. No exclusions or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
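The holding-period abatements central to this tool can be sketched. Everything below is an assumption drawn from the commonly published schedule (6%/yr for years 6-21 and 4% in year 22 for income tax; 1.65%/yr, then 1.6%, then 9%/yr for social levies) with the headline 19% / 17.2% rates; acquisition-cost adjustments, the high-gain surtax, and exemption cases are deliberately omitted, which is exactly the kind of simplification the Behavior review says should be disclosed:

```python
def fr_property_capital_gains(purchase_price: float, sale_price: float,
                              holding_years: int) -> dict:
    """Simplified FR property capital gains: 19% income tax + 17.2% social
    levies, each reduced by its own holding-period abatement schedule."""
    gain = max(sale_price - purchase_price, 0.0)

    def abatement_pct(per_year_6_21: float, year_22: float,
                      per_year_23_30: float = 0.0) -> float:
        pct = 0.0
        for year in range(6, holding_years + 1):
            if year <= 21:
                pct += per_year_6_21
            elif year == 22:
                pct += year_22
            else:
                pct += per_year_23_30
        return min(pct, 100.0)

    taxable_ir = gain * (1 - abatement_pct(6.0, 4.0) / 100)
    taxable_ps = gain * (1 - abatement_pct(1.65, 1.6, 9.0) / 100)
    return {
        "raw_gain_eur": gain,
        "tax_ir_eur": round(taxable_ir * 0.19, 2),
        "tax_ps_eur": round(taxable_ps * 0.172, 2),
    }

# 100,000 EUR gain after 10 years: 30% IR abatement, 8.25% PS abatement
print(fr_property_capital_gains(200_000, 300_000, 10))
```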

calculate_property_tax_estimate_fr (C)

Estimate French taxe foncière from cadastral value and commune rate. Returns: {estimated_tax_eur, taxable_base, commune_rate_pct}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
commune_rate (required): Commune tax rate (percent)
cadastral_value (required): Valeur locative cadastrale (EUR)

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description bears the full burden. It states the inputs but fails to disclose limitations, assumptions (e.g., a simplified calculation), or the nature of the output (e.g., an EUR amount). The description adds minimal behavioral context beyond the obvious.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 10 words, directly stating the purpose without any extraneous information. It is perfectly concise for the given complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With only two simple numeric parameters and no output schema, the description should at least hint at the output unit (e.g., EUR) or assumptions. The current description omits these details, making it incomplete for a tax estimation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema descriptions, merely restating the inputs. No extra context on parameter ranges or formats is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Estimate French taxe foncière from cadastral value and commune rate', specifying the verb (Estimate) and the resource. However, it does not differentiate this tool from the sibling 'calculate_property_tax_fr', causing potential confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided. The description does not indicate when to use this tool versus alternatives (e.g., 'calculate_property_tax_fr'), nor any prerequisites or assumptions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_property_tax_fr (B)

Compute French taxe foncière (annual property tax). Use for owners of property in France. Inputs: cadastral rental value (valeur locative), commune rate %. Returns annual tax due. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
cadastral_value (required): Cadastral rental value (valeur locative cadastrale) in EUR
commune_rate_pct (optional): Commune tax rate in % (default 25)

Output Schema
result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It states only that the tool computes the tax, with no mention of side effects, return-value structure, or whether it is read-only. While the calculation is likely non-destructive, the description lacks transparency about what the tool actually does beyond that minimal statement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and front-loaded with the main action and object. No unnecessary words or repetition. Every part earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (two parameters, no output schema), the description is minimally adequate. It explains what the tool does but lacks detail on return values, default assumptions (e.g., commune_rate_pct default of 25%), and how results are formatted. It could mention that the calculation uses cadastral rental value and a commune rate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema achieves 100% coverage with clear descriptions for both parameters. The description adds the context of 'French taxe foncière' but no additional semantic meaning beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Calculate'), the specific resource ('French taxe foncière property tax'), and the country. It distinguishes from sibling tools like 'calculate_property_capital_gains_fr' and 'calculate_property_tax_estimate_fr' by focusing on the exact tax type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. For example, it does not indicate that this is for property owners or that it only computes gross tax before deductions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_property_transfer_tax (Grade C)

Compute French property transfer tax (droits de mutation) by department. Use for property buyers. Inputs: property price, department, type (new/old). Returns tax due and effective rate. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
price | Yes | Property price in local currency |
country | Yes | Country code: FR/BE/US/UK/DE |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states it calculates tax, omitting whether the result is a percentage, amount, or any assumptions. No safety or cost information is provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence with no redundant words. It is front-loaded and earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite low complexity (2 params, no output schema), the description is too minimal. It omits return format, currency, or typical use cases, leaving gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are documented. The description adds value by specifying 'by country' but does not explain tax formulas or additional nuances beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates property transfer or registration tax by country, using a specific verb and resource. It distinguishes from siblings like calculate_stamp_duty_uk or calculate_property_tax_fr, though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this generic tool versus country-specific siblings (e.g., calculate_stamp_duty_uk). The description lacks context about alternatives or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ptz_eligibility (Grade B)

Check French PTZ (zero-rate loan) eligibility and maximum amount. Returns: {income_ceiling, ptz_max_pct_of_operation, note}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
zone | Yes | Geographic zone of the property |
household_size | Yes | Number of people in household (1-5+) |
household_income | Yes | Annual household income (revenu fiscal de référence) in EUR |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description only states the basic purpose. It does not disclose behavioral traits such as whether it is read-only, requires authentication, or has side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, front-loaded and concise. It is efficient but could be slightly more informative without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and three required parameters, the description is too brief. It does not mention return values, edge cases, or any additional context needed to understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter. The description adds no additional meaning beyond what the schema provides, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Check French PTZ (zero-rate loan) eligibility and maximum amount' with a specific verb and resource. It distinguishes from siblings as no other tool appears to cover PTZ eligibility.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. There is no mention of conditions, preconditions, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_puissance_fiscale (Grade B)

French fiscal horsepower CV = (CO2/45) + (P_kW/40)^1.6. Returns: {cv_raw, cv_fiscaux}. See list_bundles for related 'auto-transport' calculators.
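Since the description publishes its formula, it can be checked directly. A minimal sketch follows; the rounding convention for cv_fiscaux (nearest whole CV, minimum 1) is an assumption, since the server does not document it.

```python
def puissance_fiscale(co2_g_km: float, power_kw: float) -> dict:
    """French fiscal horsepower: CV = CO2/45 + (P_kW/40)**1.6."""
    cv_raw = co2_g_km / 45 + (power_kw / 40) ** 1.6
    # Rounding to the nearest whole CV with a floor of 1 is an assumption;
    # the server's exact convention is not documented.
    return {"cv_raw": cv_raw, "cv_fiscaux": max(1, round(cv_raw))}
```

For a 80 kW engine emitting 120 g/km, cv_raw is about 5.70, giving 6 CV under the assumed rounding.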

Parameters (JSON Schema)
Name | Required | Description | Default
co2_g_km | Yes | CO2 g/km |
power_kw | Yes | Engine power in kW |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description must convey behavioral traits. It only provides the formula, omitting details like that it is a pure calculation with no side effects, requires no authentication, or that it returns a scalar value. The behavioral implications are left implicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, consisting of a single sentence that contains the formula. Every element is essential, with no redundant words. It is appropriately front-loaded for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (two parameters, no output schema), the formula gives a good sense of the calculation. However, it does not explicitly state the output unit (CV) or any additional context like valid ranges or edge cases. The description is adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for both parameters (CO2 g/km and Engine power in kW). The description adds the formula but does not further clarify parameter meaning or units beyond the schema. It provides marginal additional value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the formula for calculating French fiscal horsepower, clearly indicating the tool's purpose. It uses the verb 'calculate' implicitly by providing the equation. However, it does not distinguish this tool from the many other calculator siblings, as it lacks context on unique usage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, typical scenarios, or exclusions. The description merely gives the formula without any contextual advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pump_power (Grade C)

Compute pump power requirement. P=ρ·g·H·Q/η. Use for fluid system design. Inputs: flow m³/h, head m, fluid density, efficiency. Returns kW. See list_bundles for related 'science' calculators.
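The stated formula P=ρ·g·H·Q/η can be sketched as follows. Note that the input schema exposes no density parameter even though the description lists fluid density as an input, so the water-density default here is an assumption, as is the fallback efficiency of 0.7.

```python
def pump_power_kw(flow_m3h: float, head_m: float,
                  density: float = 1000.0, efficiency: float = 0.7) -> float:
    """Hydraulic pump power P = rho * g * H * Q / eta, returned in kW.

    Flow is converted from m3/h to m3/s. The density parameter and both
    defaults (water at 1000 kg/m3, 70% efficiency) are assumptions; the
    server's schema does not expose or document them.
    """
    q_m3s = flow_m3h / 3600.0                 # m3/h -> m3/s
    p_watts = density * 9.81 * head_m * q_m3s / efficiency
    return p_watts / 1000.0
```

At 36 m³/h against a 20 m head with ideal efficiency, the sketch gives 1.962 kW of hydraulic power.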

Parameters (JSON Schema)
Name | Required | Description | Default
head_m | Yes | Head m |
flow_m3h | Yes | Flow rate m³/h |
efficiency | No | Pump efficiency |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behaviors like calculation method (e.g., using hydraulic power formula), default efficiency, or unit consistency. It provides none, leaving the agent unaware of assumptions or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short but lacks substance. Conciseness without completeness is not helpful; it should be front-loaded with a clear action statement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of pump power calculation and the absence of annotations and output schema, this description is severely incomplete. An agent cannot infer the formula, return format, or units from this text.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning to the parameters—such as explaining the relationship between flow, head, and efficiency—but the schema itself gives basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Pump power requirement' is a noun phrase that does not explicitly state the action (calculating) and barely adds meaning beyond the tool name. It fails to specify what the tool does—e.g., 'Calculate the power required to operate a pump given flow rate, head, and efficiency.' This is a tautological restatement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the many other calculate_* siblings. There is no mention of prerequisites, assumptions, or suitable scenarios, leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_purchasing_power (Grade C)

Compare purchasing power between two years. Use to translate historical prices, salaries, or savings to today's value. Inputs: amount, from-year, to-year, average inflation %. Returns equivalent value. See list_bundles for related 'finance-universal' calculators.
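A plausible reading of this calculation is compound growth at the average inflation rate over the span between the two years. The sketch below follows that reading; the 2% default inflation rate is an assumption, since the server's default is not documented.

```python
def purchasing_power(amount: float, from_year: int, to_year: int,
                     avg_inflation: float = 2.0) -> float:
    """Equivalent value of `amount` moved from from_year to to_year.

    Compounds a constant average annual inflation rate (in %). The 2%
    default is an illustrative assumption, not the server's documented
    behavior. A to_year earlier than from_year deflates the amount.
    """
    years = to_year - from_year
    return amount * (1 + avg_inflation / 100) ** years
```

For example, 100 units in 2000 correspond to roughly 121.90 units in 2010 at 2% average inflation.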

Parameters (JSON Schema)
Name | Required | Description | Default
amount | Yes | Amount to compare |
to_year | Yes | Target year |
from_year | Yes | Starting year |
avg_inflation | No | Average annual inflation rate in % |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It does not disclose assumptions (e.g., default inflation rate), limitations, or whether the result is an adjusted amount or a comparison ratio. The behavior beyond the basic function is opaque.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that conveys the core purpose with no unnecessary words. It is maximally concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters and no output schema, the description lacks essential context such as what the function returns (e.g., adjusted amount, percentage change) and how the optional parameter affects results. A more complete description would explain the output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptive parameter names and descriptions (e.g., 'Amount to compare', 'Starting year'). The description adds no further semantic value beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Compare purchasing power between two years' clearly states the action (compare), resource (purchasing power), and context (between two years). It distinguishes well from siblings like 'calculate_inflation_adjusted_value' by using a different verb, but does not explicitly differentiate from all similar tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'calculate_inflation_adjustment' or other financial calculators. The description does not mention prerequisites or context of use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pyramid (Grade C)

Compute pyramid volume V=(1/3)·base_area·height. Use for geometry or architecture. Inputs: base area, height (and optional slant for surface area). Returns volume and surface area. See list_bundles for related 'math' calculators.
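The schema takes base_length rather than the base area named in the description, which suggests a right square pyramid. Under that assumption, the computation sketches as follows; the square base and the surface-area convention (base plus four lateral triangles) are assumptions, not documented server behavior.

```python
import math

def square_pyramid(base_length: float, height: float) -> dict:
    """Right square pyramid: V = (1/3) * base_area * height, plus surface.

    The square base is an assumption inferred from the base_length
    parameter; the description speaks only of a generic base area.
    """
    volume = base_length ** 2 * height / 3
    slant = math.hypot(height, base_length / 2)   # slant height to edge midpoint
    surface = base_length ** 2 + 2 * base_length * slant
    return {"volume": volume, "surface_area": surface}
```

A 6-by-6 base with height 4 gives volume 48 and total surface area 96 (slant height 5).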

Parameters (JSON Schema)
Name | Required | Description | Default
height | Yes | Pyramid height |
base_length | Yes | Base side length |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must fully disclose behavior. It only states 'Pyramid volume', failing to mention return value, formula, or units. The tool's behavior is opaque beyond the name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, but it sacrifices clarity and completeness. It does not front-load useful information; it is under-specified rather than efficiently concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 parameters and no output schema, the description still lacks completeness. It does not state the pyramid type (e.g., square base) or that it computes V = (1/3)*base_area*height. More context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions. The tool description adds no extra meaning, but schema descriptions suffice for basic understanding. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Pyramid volume' clearly indicates the tool calculates pyramid volume, matching the name. It is a specific verb-resource pair. However, it does not differentiate from numerous sibling calculation tools, lacking unique identifiers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool over alternatives. There is no mention of context such as base shape (e.g., square pyramid) or any exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_pythagoras (Grade B)

Find missing side of right triangle using Pythagorean theorem. Returns: {error}. See list_bundles for related 'math' calculators.
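All three sides are optional in the schema, which implies a caller supplies exactly two and the tool solves for the third. A minimal sketch of that implied contract follows; the exactly-two rule and the {error} return shape are assumptions drawn from the schema and the 'Returns: {error}' note, not documented server behavior.

```python
import math

def pythagoras(a=None, b=None, c=None):
    """Solve the missing side of a right triangle (a^2 + b^2 = c^2).

    Exactly two of the three sides must be given; this constraint is an
    assumption the tool's own description leaves implicit.
    """
    given = sum(v is not None for v in (a, b, c))
    if given != 2:
        return {"error": "provide exactly two of a, b, c"}
    if c is None:
        return {"c": math.hypot(a, b)}
    known = a if a is not None else b
    if c <= known:
        return {"error": "hypotenuse must be the longest side"}
    missing = math.sqrt(c * c - known * known)
    return {"a": missing} if a is None else {"b": missing}
```

For the classic 3-4-5 triangle: two legs give the hypotenuse, and a leg plus the hypotenuse recovers the other leg.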

Parameters (JSON Schema)
Name | Required | Description | Default
a | No | Side a length |
b | No | Side b length |
c | No | Hypotenuse c length |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose important behavioral traits. However, it only states the core function and does not mention requirements (exactly two inputs needed), error conditions (e.g., if wrong combination), or assumptions (right triangle). With zero annotation support, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, front-loading the key information. It is efficient, though it could be slightly improved by adding a brief usage note (e.g., 'Provide exactly two sides'). Still, it is concise and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters and no output schema, the description is incomplete. It fails to specify that exactly two of the three parameters must be provided, that the triangle must be right-angled, or what happens if invalid input is given. An agent needs more context to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with each parameter clearly labeled ('Side a length', etc.). The description adds no additional meaning beyond the schema, so it meets the baseline but does not improve parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's specific purpose: 'Find missing side of right triangle using Pythagorean theorem'. It uses a specific verb ('find') and resource ('missing side of right triangle'), and the unique name and description differentiate it from many sibling calculator tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for right triangle calculations but does not explicitly state when to use this tool versus alternatives, nor does it mention prerequisites (e.g., need exactly two sides) or exclusions. The guidance is implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_quadratic_equation (Grade B)

Solve quadratic equation ax²+bx+c=0 with discriminant analysis. Use for math homework or physics problems. Inputs: coefficients a, b, c. Returns roots (real or complex), discriminant, and vertex. See list_bundles for related 'math' calculators.
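The promised outputs (roots real or complex, discriminant, vertex) can be sketched with a complex square root so that a negative discriminant yields complex roots. The return shape below is an assumption; the server documents only the field names, not their structure.

```python
import cmath

def quadratic(a: float, b: float, c: float) -> dict:
    """Roots, discriminant, and vertex of ax^2 + bx + c = 0 (a != 0)."""
    if a == 0:
        return {"error": "a must be non-zero"}
    disc = b * b - 4 * a * c
    sq = cmath.sqrt(disc)                  # complex sqrt handles disc < 0
    r1, r2 = (-b + sq) / (2 * a), (-b - sq) / (2 * a)
    vx = -b / (2 * a)                      # vertex x = -b / 2a
    return {"discriminant": disc,
            "roots": (r1, r2),
            "vertex": (vx, a * vx * vx + b * vx + c)}
```

For x² - 3x + 2 = 0 this yields discriminant 1, roots 2 and 1, and vertex (1.5, -0.25).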

Parameters (JSON Schema)
Name | Required | Description | Default
a | Yes | Coefficient a |
b | Yes | Coefficient b |
c | Yes | Coefficient c |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description only mentions 'solve' and 'find vertex' without disclosing edge cases (e.g., a=0, complex roots), error handling, or return format. Minimal behavioral insight beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no redundancy, front-loaded with the core purpose. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite simplicity, the description omits what the tool returns (roots, vertex coordinates). No output schema exists, so the description should clarify the output. Lacks completeness for an agent to fully understand the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter (Coefficient a, b, c). The description adds context by naming the equation form, but does not provide additional meaning beyond what the schema already implies. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it solves the quadratic equation ax²+bx+c=0 and finds the vertex, which is specific and distinguishes it from sibling tools like calculate_equation or other calculate tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no prerequisites or conditions provided. The description only states what it does, not when it's appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_quebec_income_tax (Grade A)

Calculate Quebec provincial income tax (Revenu Québec) with basic personal amount deduction. Returns: {income_cad, basic_personal_amount, taxable_income, provincial_tax, effective_rate_pct, marginal_rate_pct, ...}. See list_bundles for related 'finance-afrique-quebec' calculators.
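The bracket arithmetic behind outputs like provincial_tax, effective_rate_pct, and marginal_rate_pct generalizes as below. The brackets and basic personal amount used here are illustrative placeholders, not Revenu Québec's actual figures, and the server's tax year and rounding are undocumented.

```python
def provincial_tax(income: float, basic_personal_amount: float,
                   brackets: list[tuple[float, float]]) -> dict:
    """Progressive tax over ascending (threshold, rate_pct) brackets.

    All figures passed in are caller-supplied; the DEMO brackets below
    are placeholders for illustration, NOT real Quebec rates.
    """
    taxable = max(0.0, income - basic_personal_amount)
    tax, marginal = 0.0, 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if taxable > lo:
            tax += (min(taxable, hi) - lo) * rate / 100   # slice taxed at rate
            marginal = rate                               # last bracket reached
    return {"taxable_income": taxable,
            "provincial_tax": round(tax, 2),
            "effective_rate_pct": round(100 * tax / income, 2) if income else 0.0,
            "marginal_rate_pct": marginal}

# Illustrative brackets only (threshold, rate %), not official figures:
DEMO = [(0, 14.0), (50000, 19.0), (100000, 24.0)]
```

With the placeholder figures, an income of 60,000 against a 17,000 personal amount falls entirely in the first bracket, so the effective rate sits well below the marginal rate.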

Parameters (JSON Schema)
Name | Required | Description | Default
income_cad | Yes | Annual income in Canadian dollars (CAD) |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions only the 'basic personal amount deduction' but does not disclose whether other deductions or credits are accounted for, which tax year applies, or whether it returns an exact amount. This is insufficient disclosure for a calculation whose result depends on unstated assumptions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, directly states purpose and key detail (deduction). No fluff, front-loaded. Effective for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one parameter and no output schema, the description is adequate for basic use. However, it could mention that it uses current tax rates or that no additional deductions are included. The sibling set is large, but the description doesn't leverage that for differentiation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a description for income_cad. The tool description adds 'with basic personal amount deduction', which indirectly clarifies parameter use. However, it doesn't add significant new meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Quebec provincial income tax including the basic personal amount. It distinguishes itself from sibling tax calculators (e.g., canada_federal_tax, canada_combined_tax) by specifying 'Quebec provincial' and 'Revenu Québec'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use vs. alternatives. The description implies it's for Quebec provincial tax, but doesn't mention that for federal or other province taxes, other tools should be used. The context of many tax siblings makes this a gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_race_prediction (B)

Predict race time for a target distance using Riegel formula. Returns: {predicted_time_minutes, predicted_formatted, predicted_pace_min_km}. See list_bundles for related 'sport' calculators.
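For reference, the Riegel formula the description names is T2 = T1 · (D2/D1)^1.06. A minimal sketch, not the server's implementation:

```python
def predict_race_time(reference_distance_km: float,
                      reference_time_minutes: float,
                      target_distance_km: float,
                      exponent: float = 1.06) -> dict:
    """Riegel prediction: T2 = T1 * (D2 / D1) ** 1.06.

    1.06 is Riegel's standard fatigue exponent; the formula is most
    reliable for target distances reasonably close to the reference race.
    """
    t2 = reference_time_minutes * (target_distance_km / reference_distance_km) ** exponent
    pace = t2 / target_distance_km  # minutes per kilometre
    return {"predicted_time_minutes": round(t2, 1),
            "predicted_pace_min_km": round(pace, 2)}
```

Doubling the distance multiplies the time by 2^1.06 ≈ 2.085, which is why the predicted pace slows slightly as distance grows.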

Parameters (JSON Schema)
Name | Required | Description | Default
target_distance_km | Yes | Target race distance in km
reference_distance_km | Yes | Reference race distance in km
reference_time_minutes | Yes | Reference race time in minutes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It only mentions the formula name but omits assumptions (e.g., valid distance range), limitations, or side effects. This is insufficient for a prediction tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is very concise, but could benefit from slight expansion to mention the output format or the formula's constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description does not hint at return format (e.g., predicted time in minutes). The formula context is given but not elaborated. Adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for each parameter. The description adds no extra meaning beyond the schema, meeting the baseline but not improving it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('predict'), the resource ('race time for a target distance'), and the method ('Riegel formula'). It is specific and distinguishes the tool from many calculation siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like calculate_running_pace or calculate_marathon_splits. No prerequisites or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_radioactive_decay (C)

Compute remaining quantity after radioactive decay. Use for physics or carbon dating. Formula: N=N0·(0.5)^(t/half-life). Inputs: initial qty, half-life, elapsed time. Returns remaining qty. See list_bundles for related 'science' calculators.
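The stated formula is simple enough to show directly. A sketch, assuming half-life and elapsed time share the same unit:

```python
def remaining_quantity(initial: float, half_life: float, time: float) -> float:
    """N = N0 * 0.5 ** (t / half_life).

    `half_life` and `time` must be in the same unit (years, days, ...);
    the result is in the same unit as `initial`.
    """
    return initial * 0.5 ** (time / half_life)
```

Two half-lives leave a quarter of the initial quantity: remaining_quantity(100, 10, 20) returns 25.0.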

Parameters (JSON Schema)
Name | Required | Description | Default
time | Yes | Time elapsed
initial | Yes | Initial amount
half_life | Yes | Half-life

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It only gives the formula but does not specify that the operation is a pure calculation (read-only), mention any edge cases (e.g., half-life of zero is prevented by minimum 0.001), or indicate the return format. The transparency is insufficient for an agent to understand side effects or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single line containing the formula. Every word is necessary and there is no superfluous information. For a simple mathematical tool, this level of conciseness is ideal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the tool is simple, the description lacks any mention of the return value (e.g., 'Returns the remaining amount N'). Without an output schema, the agent would benefit from knowing what the tool returns. Additionally, units or example values are absent. The formula provides the calculation logic but not all contextual information an agent might need.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with each parameter having a brief description ('Initial amount', 'Half-life', 'Time elapsed') and minimum constraints. The tool description adds no additional semantic context beyond the formula; it maps parameters to symbols (N0, t_half, t) but does not clarify units or expected ranges. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the radioactive decay formula, which implies the tool calculates remaining amount. However, it does not explicitly state the verb (e.g., 'calculate remaining amount') or distinguish it from sibling tools like 'calculate_caffeine_half_life' or 'calculate_compound_interest' which involve similar exponential decay. The formula is clear but the purpose could be more direct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description lacks any mention of context, prerequisites, or situations where this formula is appropriate. With many sibling calculator tools, the agent receives no help in deciding to use this one over similar ones.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_rainwater_collection (C)

Estimate annual rainwater collection volume from a roof. See list_bundles for related 'astronomie-nature' calculators.
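The underlying arithmetic is the standard catchment rule that 1 mm of rain on 1 m² yields 1 litre. A sketch under that assumption:

```python
def annual_rainwater_litres(roof_area_m2: float,
                            annual_rainfall_mm: float,
                            efficiency_pct: float = 80.0) -> float:
    """1 mm of rainfall on 1 m^2 of catchment = 1 litre of water.

    `efficiency_pct` (default 80, matching the schema) accounts for
    losses such as first flush, evaporation, and gutter overflow.
    """
    return roof_area_m2 * annual_rainfall_mm * efficiency_pct / 100.0
```

A 100 m² roof in an 800 mm/year climate collects about 64,000 L at 80% efficiency.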

Parameters (JSON Schema)
Name | Required | Description | Default
roof_area_m2 | Yes | Roof catchment area in square metres
efficiency_pct | No | Collection efficiency percentage (default 80%)
annual_rainfall_mm | Yes | Average annual rainfall in millimetres

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only says 'estimate', which hints at a non-destructive read operation, but does not reveal calculation assumptions, potential ranges, or return details. Insufficient for safe agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence, no wasted words. It is appropriately concise for a simple calculator, though it lacks some details that could be included without harming brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description should explain what the tool returns (e.g., volume in liters). It does not. Given many sibling tools, more context on when to use this specifically is missing. Incomplete for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema provides 100% coverage with clear descriptions for all three parameters. The description adds no additional meaning beyond stating that it estimates volume, so it does not improve parameter understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Estimate' and resource 'annual rainwater collection volume from a roof', clearly indicating what the tool does. It effectively distinguishes from the many sibling 'calculate_*' tools by focusing on a niche calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus the many other calculation tools. Lacks context about prerequisites, typical use cases, or alternatives, making it hard for the agent to select this over similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_raised_bed_soil (C)

Compute soil volume needed to fill a raised garden bed. Use for gardening setup. Inputs: length, width, depth (m). Returns soil m³, bag count, mix recommendation (40% compost / 30% topsoil / 30% sand). See list_bundles for related 'jardinage' calculators.
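The volume rule is length × width × depth. A sketch; the 50 L bag size is a hypothetical assumption, since the server does not document the bag size behind its bag count:

```python
import math

def raised_bed_soil(length_m: float, width_m: float,
                    depth_cm: float = 30.0,
                    bag_volume_l: float = 50.0) -> dict:
    """Soil volume = length * width * depth (depth converted from cm).

    `bag_volume_l` is a hypothetical 50 L bag size; substitute the
    actual bag size sold locally.
    """
    volume_m3 = length_m * width_m * depth_cm / 100.0
    bags = math.ceil(volume_m3 * 1000 / bag_volume_l)  # 1 m^3 = 1000 L
    return {"soil_m3": round(volume_m3, 3),
            "bags": bags,
            "mix_pct": {"compost": 40, "topsoil": 30, "sand": 30}}
```

A 2 m × 1 m bed at the default 30 cm depth needs 0.6 m³ of soil, i.e. twelve 50 L bags.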

Parameters (JSON Schema)
Name | Required | Description | Default
width_m | Yes | Raised bed width in meters
depth_cm | No | Raised bed depth in centimeters (default 30cm)
length_m | Yes | Raised bed length in meters

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It does not mention the output unit (e.g., liters or cubic meters), whether it assumes a standard soil mix ratio, or any calculation assumptions. This is insufficient for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise with a single sentence. It is front-loaded and contains no fluff. However, it could be slightly more informative without losing conciseness, e.g., specifying output units.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should clarify what the tool returns (e.g., total volume in specific units). It does not mention return values or any additional context like soil compaction or mix proportions. The description is incomplete for a calculation tool without output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so all parameters have descriptions. The description adds no extra meaning beyond the schema, which already defines length, width, and depth with units. Baseline score of 3 is appropriate as the description does not compensate for any schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates soil mix volumes for a raised garden bed. It is specific with verb and resource, but among many sibling calculators, it does not differentiate itself from other soil volume calculators like calculate_garden_soil. However, the focus on 'raised bed' adds specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives. Given the large number of sibling calculators, the description should indicate if this tool is specifically for raised beds and not for other contexts like in-ground gardens or compost volumes. Lack of exclusions or context reduces helpfulness.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ratio_simplify (B)

Simplify a ratio to lowest terms using GCD. Use for proportions, mixing, or scaling. Inputs: a, b (and optional c). Returns simplified ratio. See list_bundles for related 'math' calculators.
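The GCD reduction the description names is a two-liner; a minimal sketch of the same computation:

```python
from math import gcd

def simplify_ratio(numerator: int, denominator: int) -> tuple:
    """Divide both terms by their greatest common divisor."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g
```

For example, simplify_ratio(18, 24) gives (3, 4); a ratio already in lowest terms is returned unchanged.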

Parameters (JSON Schema)
Name | Required | Description | Default
numerator | Yes | Numerator
denominator | Yes | Denominator

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden for behavioral disclosure. It correctly implies a safe, read-only computation, but it does not mention any potential edge cases (e.g., very large numbers), the output format, or that it uses integer arithmetic via GCD. The transparency is adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no extraneous words. Every word serves a purpose, achieving maximum conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should clarify what the result looks like (e.g., a tuple or formatted string). It does not. Given the simplicity of the tool (two positive integer inputs), the description is minimally adequate but lacks enough detail for an agent to fully understand the return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, both parameters have clear labels ('Numerator', 'Denominator'). The description adds no new semantic meaning beyond the schema, as 'simplify a ratio' merely restates the parameter roles. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'simplify' and resource 'ratio', indicating the tool reduces a ratio to its lowest terms. While it is specific enough to distinguish from many siblings like 'calculate_fraction_operations', it could be more explicit by mentioning the greatest common divisor (GCD) method.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not specify when to use this tool over alternatives such as 'calculate_fraction_operations' or any other ratio-related tools. Agents receive no context about prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_reading_time (B)

Estimate reading time for a text based on word count. Returns: {hours_minutes}. See list_bundles for related 'education' calculators.
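The calculation is words divided by reading speed. A sketch; the 200 wpm default is an assumption, since the schema does not state one:

```python
def reading_time(word_count: int, reading_speed_wpm: int = 200) -> dict:
    """Minutes = words / wpm. 200 wpm is an assumed default reading
    speed, not a value documented by the server."""
    total_minutes = word_count / reading_speed_wpm
    hours, minutes = divmod(round(total_minutes), 60)
    return {"total_minutes": round(total_minutes, 1),
            "hours_minutes": f"{hours}h {minutes:02d}min"}
```

A 30,000-word text at 200 wpm comes out to 150 minutes, formatted as "2h 30min".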

Parameters (JSON Schema)
Name | Required | Description | Default
word_count | Yes | Number of words in text
reading_speed_wpm | No | Reading speed words per minute

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description does not disclose behavioral details such as rounding, default reading speed (though present in schema), or return value format (minutes vs. string).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no filler. However, it could be slightly expanded to improve completeness without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and the description does not mention return value format (e.g., minutes, seconds, string). For a simple estimation tool, this lack of completeness may cause confusion.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already defines both parameters (word_count, reading_speed_wpm) with descriptions. The description adds no extra meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates reading time based on word count. It is specific and distinct from sibling tools like calculate_cooking_time or calculate_exposure_triangle.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., other time-related calculators). No conditions or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_real_estate_agency_fees (B)

Calculate French real estate agency fees using sliding scale. Returns: {agency_fees, scale}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
sale_price | Yes | Property sale price in EUR

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must disclose behavioral traits. It mentions 'sliding scale' but does not explain the scale's tiers or range, nor does it describe the output format, read-only nature, or any side effects. This is insufficient for an agent to invoke safely.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short (8 words) but lacks essential details like what the output is or how the sliding scale works. While not verbose, it is underspecified for a tool that should provide actionable guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one parameter, no output schema), the description is incomplete. It omits the range of the sliding scale, whether fees are buyer or seller fees, and the output structure (e.g., return a single fee amount or breakdown). More context is needed for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'sale_price' well-described. The description adds 'using sliding scale', which hints at the calculation method but does not clarify the parameter's role beyond the schema. Baseline of 3 is appropriate as the description adds marginal value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate' and the specific resource 'French real estate agency fees', with the method 'using sliding scale'. This distinguishes it from sibling tools like 'calculate_notary_fees' or 'calculate_property_tax_fr'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With many sibling tools for French real estate costs, the description should include context like 'Use this for agency fees, not notary fees or property taxes.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_recipe_nutrition (C)

Compute total calories, protein, carbs, fat for a recipe and per serving. Use for meal planning or nutrition labels. Inputs: list of ingredients with grams, servings count. Returns macro breakdown per serving and total. See list_bundles for related 'cuisine' calculators.
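The proportional summation described here can be sketched as follows. The per-ingredient field names (quantity_g, kcal_per_100g, and so on) are assumptions for illustration, since the server's ingredient schema carries no descriptions:

```python
def recipe_nutrition(ingredients: list, servings: int = 1) -> dict:
    """Sum per-100 g macros scaled by each ingredient's quantity.

    Field names are HYPOTHETICAL -- the actual schema of the
    `ingredients` array is undocumented.
    """
    totals = {"kcal": 0.0, "protein_g": 0.0, "carbs_g": 0.0, "fat_g": 0.0}
    for ing in ingredients:
        factor = ing["quantity_g"] / 100.0  # scale per-100 g values to actual grams
        totals["kcal"] += ing["kcal_per_100g"] * factor
        totals["protein_g"] += ing["protein_per_100g"] * factor
        totals["carbs_g"] += ing["carbs_per_100g"] * factor
        totals["fat_g"] += ing["fat_per_100g"] * factor
    return {"total": {k: round(v, 1) for k, v in totals.items()},
            "per_serving": {k: round(v / servings, 1) for k, v in totals.items()}}
```

Dividing the totals by the servings count yields the per-serving breakdown the description promises.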

Parameters (JSON Schema)
Name | Required | Description | Default
ingredients | Yes |

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior. It only describes the high-level purpose ('sum macronutrients proportionally to quantities') but fails to mention return format, potential rounding, or any assumptions about input validity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, making it concise and front-loaded, but it is too sparse to provide adequate guidance. While not verbose, it sacrifices completeness for brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is severely incomplete. With no annotations, no output schema, and a complex input (array of objects), it fails to cover essential details like return structure, error handling, or usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not add any meaning to the 'ingredients' parameter beyond what is already in the schema. With 0% schema description coverage, the description should explain how per-100g values and quantity_g are used, but it does not.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'sum' and the resource 'macronutrients for a list of ingredients proportionally to quantities'. It is specific and distinguishes this tool from numerous sibling calculators like calculate_calories_burned or calculate_daily_protein.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or context for when to choose calculate_recipe_nutrition over similar tools like calculate_recipe_scaling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
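The per-100g proportional summation that calculate_recipe_nutrition's description implies can be sketched as follows. This is a hypothetical reimplementation for illustration only: the ingredient field names, rounding, and macro keys are assumptions, not the server's actual schema.

```python
def recipe_nutrition(ingredients, servings):
    """Sum macros proportionally to gram quantities, then divide by servings.

    Each ingredient dict is assumed to carry per-100g values, e.g.
    {"quantity_g": 200, "kcal_per_100g": 364, "protein_g_per_100g": 10, ...}.
    """
    totals = {"kcal": 0.0, "protein_g": 0.0, "carbs_g": 0.0, "fat_g": 0.0}
    for ing in ingredients:
        factor = ing["quantity_g"] / 100.0  # scale per-100g values by grams used
        for key in totals:
            totals[key] += ing[f"{key}_per_100g"] * factor
    per_serving = {k: round(v / servings, 1) for k, v in totals.items()}
    return {"total": totals, "per_serving": per_serving}
```

A description that stated this contract explicitly (per-100g inputs, per-serving division, rounding) would address most of the Behavior and Parameters gaps noted above.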

calculate_recipe_scale (Grade C)

Scale recipe quantities to a new servings count. Use for adjusting recipes. Inputs: ingredients with quantities, original servings, target servings. Returns adjusted quantities. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
amount | Yes | Original ingredient amount
target_servings | Yes | Target servings
original_servings | Yes | Original servings

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must inform behavior. It does not mention any behavioral traits such as rounding, precision, units, or limits. The description is too minimal to disclose what happens during scaling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short (4 words), but it is under-specified rather than appropriately concise. It omits critical information, so it does not earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should compensate but fails to do so. It does not explain the return format, expected behavior, or any constraints, leaving significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, meaning all parameters have descriptions. The tool description adds no extra meaning beyond what the schema already provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Scale recipe ingredients' is a verb-noun phrase that conveys the general purpose, but it is vague and does not specify the exact operation (e.g., multiplying amounts by a factor). It does not differentiate from sibling tools like 'calculate_recipe_scaling' or 'calculate_recipe_nutrition'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of context, prerequisites, or exclusions, leaving the agent without direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
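The linear scaling behind calculate_recipe_scale reduces to one multiplication. The sketch below is an assumed reimplementation (the server's rounding and validation are not documented):

```python
def scale_amount(amount, original_servings, target_servings):
    """Scale one ingredient amount by the ratio target/original servings."""
    if original_servings <= 0:
        raise ValueError("original_servings must be positive")
    return amount * target_servings / original_servings
```

Documenting exactly this formula, plus whether results are rounded to kitchen-friendly quantities, would resolve the Behavior and Completeness criticisms above.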

calculate_recipe_scaling (Grade C)

Scale recipe ingredients up or down by a factor or new servings count. Use for cooking adjustments. Inputs: ingredients list, original servings, new servings. Returns scaled ingredient list. See list_bundles for related 'cuisine' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
target | Yes | Target servings
original | Yes | Original servings
ingredients | Yes | Ingredients

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full responsibility for behavioral disclosure. It states only that the tool scales ingredients, but does not mention whether it returns scaled quantities, handles unit conversions, or truncates decimals. Key behaviors like output format rounding are omitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at four words. It is efficient and free of fluff, though it could benefit from slightly more detail (e.g., 'Scale recipe ingredients based on serving size'). Still, it is not wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema and three required parameters. The description does not explain what the tool returns (e.g., scaled ingredient list with adjusted quantities) or any edge cases (e.g., fractional servings). For a tool with no output schema, this is a notable gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema includes descriptions for all three parameters ('Target servings', 'Original servings', 'Ingredients'). The tool description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Scale recipe ingredients' indicates the tool's verb and resource, but it is vague about what 'scale' means (e.g., by servings, by factor). It does not differentiate from the similar sibling tool 'calculate_recipe_scale', leaving ambiguity about the exact scaling method.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not specify when to use this tool over alternatives like 'calculate_recipe_scale' or 'calculate_recipe_nutrition', nor does it mention any prerequisites or context for scaling recipes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_regular_polygon (Grade D)

Compute area, perimeter, and apothem of a regular n-gon. Use for geometry or tiling. Inputs: number of sides, side length. Returns full geometric properties. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
sides | Yes | Number of sides
length | Yes | Side length

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description offers no behavioral insights (e.g., whether the tool is read-only, error conditions, or return format). The description is essentially a label with no behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At only two words, the description is underspecified rather than concise. It lacks necessary detail, making it inadequate for understanding tool usage. Compare to the LOW calibration example 'Process' which also scored 2.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema and no annotations, the description should explain what the tool returns. It fails to do so, leaving the agent without critical context on output or behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters ('Number of sides' and 'Side length'), so the schema already conveys meaning. The description adds no additional semantic value beyond what is in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Regular polygon properties' is vague; it does not specify a verb or the specific properties calculated. It partially states the resource but lacks action and scope, making it unclear what the tool does beyond a general topic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines provided. The description does not indicate when to use this tool over alternatives like other geometric calculators, nor does it mention prerequisites or context for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
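The "full geometric properties" that calculate_regular_polygon's description promises follow from standard closed-form identities for a regular n-gon of side s: perimeter p = n·s, apothem a = s / (2·tan(π/n)), area = p·a/2. A minimal sketch (assumed, not the server's code):

```python
import math

def regular_polygon(sides, length):
    """Closed-form perimeter, apothem, and area of a regular n-gon."""
    if sides < 3:
        raise ValueError("a polygon needs at least 3 sides")
    perimeter = sides * length
    apothem = length / (2 * math.tan(math.pi / sides))
    area = perimeter * apothem / 2
    return {"perimeter": perimeter, "apothem": apothem, "area": area}
```

Naming these three formulas in the tool description would directly address the Behavior and Completeness scores above.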

calculate_rental_profitability (Grade C)

Compute net rental profitability after taxes and charges. Use for real estate investment analysis. Inputs: purchase price, monthly rent, charges, vacancy %, tax bracket. Returns net yield % and cash flow. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
annual_tax | Yes | Annual property tax in EUR
monthly_rent | Yes | Monthly rental income in EUR
purchase_price | Yes | Purchase price in EUR
monthly_charges | Yes | Monthly charges/expenses in EUR
notary_fees_pct | No | Notary fees as % of price (default 8)

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility. It only states the tool 'calculates' without disclosing behavioral details such as whether it accounts for taxes, fees, or uses specific formulas. This is insufficient for a financial calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no unnecessary words. It is front-loaded and efficient, though it could briefly elaborate on the output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should clarify the return value. It implies two outputs (profitability and cash flow) but lacks detail. With 5 parameters, the description is minimally adequate but leaves questions about the format or units of results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema's parameter descriptions. The mention of 'annual cash flow' in the description is not reflected in any parameter, so no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates rental investment profitability and annual cash flow. It uses a specific verb and resource, but does not explicitly differentiate from sibling rental yield calculators, which is a minor gap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_rental_yield' or 'calculate_rental_yield_net'. The description lacks any context about prerequisites or typical scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_rental_yield (Grade B)

Calculate gross and net rental yield for a real estate investment. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
annual_rent | Yes | Annual rental income in EUR
annual_charges | No | Annual charges/expenses in EUR (default 0)
purchase_price | Yes | Purchase price in EUR

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must carry behavioral disclosure. It does not clarify output format, side effects (none expected for a calculator), or how net yield is computed (e.g., whether annual_charges is subtracted). Minimal behavioral insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is concise but lacks structure. It could be improved with a brief breakdown of what is returned (gross and net values) but remains efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and the presence of sibling tools for individual yield components, the description should clarify that both yields are computed simultaneously. It does not, leaving completeness gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are well-documented. The description adds minimal value by implying annual_charges is needed for net yield, but this is implicit. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates 'gross and net rental yield' for real estate investment, distinguishing it from sibling tools like 'calculate_rental_yield_gross' and 'calculate_rental_yield_net' which focus on individual yield types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this combined tool versus the separate gross/net tools or other alternatives like 'calculate_rental_profitability'. The description lacks context about optimal use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
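The conventional definitions behind calculate_rental_yield are simple: gross yield = annual rent / purchase price, net yield subtracts annual charges first, both expressed as percentages. The sketch below states that assumed formula explicitly (the server's actual computation is not documented):

```python
def rental_yield(purchase_price, annual_rent, annual_charges=0.0):
    """Gross and net rental yield as percentages of the purchase price."""
    gross = annual_rent / purchase_price * 100
    net = (annual_rent - annual_charges) / purchase_price * 100
    return {"gross_yield_pct": gross, "net_yield_pct": net}
```

Spelling out this Behavior (both yields returned together, charges only affect the net figure) is exactly what the evaluation above flags as missing.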

calculate_rental_yield_gross (Grade B)

Calculate gross rental yield from property price and monthly rent. Returns: {gross_yield_pct}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
monthly_rent | Yes | Monthly rent in EUR
purchase_price | Yes | Property purchase price in EUR

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits. For a calculation tool, it does not state that the operation is safe and has no side effects or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loaded with the action 'Calculate', and contains no unnecessary words. It is maximally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description fails to mention the return value format (e.g., percentage or decimal) and does not explain the formula or assumptions behind 'gross rental yield'. Given the absence of an output schema, this leaves the agent uncertain about the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides full coverage with explicit descriptions for both parameters. The tool description adds no additional semantic meaning beyond what the schema already conveys.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates gross rental yield from property price and monthly rent. It includes 'gross' in the description, distinguishing it from sibling tools like calculate_rental_yield_net.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool over siblings such as calculate_rental_yield_net or calculate_rental_profitability. The agent must infer usage purely from the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_rental_yield_net (Grade A)

Calculate net rental yield after charges and vacancy. Returns: {net_yield_pct, effective_annual_rent, net_income}. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
monthly_rent | Yes | Monthly rent EUR
vacancy_rate | Yes | Vacancy rate percent
annual_charges | Yes | Annual charges, taxes, insurance EUR
purchase_price | Yes | Property price EUR

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It implies a read-only calculation with no side effects, but does not explicitly state behavioral traits like idempotency or required permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence of 8 words conveys the essential purpose. No unnecessary words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with well-documented schema and no output schema, the description is adequate. It could mention the return format (e.g., percentage), but it's inferable from the purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema documents parameters sufficiently. The description adds no extra meaning beyond the schema, matching the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'net rental yield', and specifies 'after charges and vacancy', which distinguishes it from sibling tools like calculate_rental_yield_gross.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies it's for net yield, but does not mention when not to use it or compare with siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
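Unlike the gross variant, calculate_rental_yield_net documents its return fields ({net_yield_pct, effective_annual_rent, net_income}), so the underlying arithmetic can be reconstructed with reasonable confidence. The formula below is still an assumption: vacancy trims the collectible rent before charges are deducted.

```python
def net_rental_yield(purchase_price, monthly_rent, annual_charges, vacancy_rate):
    """Net yield after a vacancy haircut and annual charges (formula assumed)."""
    effective_annual_rent = monthly_rent * 12 * (1 - vacancy_rate / 100)
    net_income = effective_annual_rent - annual_charges
    return {
        "effective_annual_rent": effective_annual_rent,
        "net_income": net_income,
        "net_yield_pct": net_income / purchase_price * 100,
    }
```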

calculate_rent_increase_irl (Grade B)

Calculate rent increase allowed by French IRL index. Returns: {new_rent_eur, increase_eur}. See list_bundles for related 'finance-france' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
new_irl | Yes | Latest published IRL
old_irl | Yes | IRL at lease start
current_rent | Yes | Current rent EUR

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It does not disclose what the output is (e.g., new rent or increase amount), any rounding behavior, or assumptions about lease types. This is inadequate for a mutation-like calculation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no wasted words. It front-loads the key purpose and is well-structured for quick scanning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema and is a calculation with three parameters, the description should explain what the result is or the formula. It does not, leaving the agent to guess the output structure. This is a significant gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes each parameter with clear names and descriptions (100% coverage). The description adds no extra meaning beyond what the schema provides, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates rent increase using the French IRL index, which is a specific verb+resource. Among many calculate tools, it distinguishes itself by naming the specific index and purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool vs. alternatives, no prerequisites, and no examples of when not to use it. The agent must infer usage from the name and schema alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
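The standard IRL revision rule for French leases is new rent = current rent × (new IRL / old IRL). The sketch below assumes that formula and euro-cent rounding; neither is confirmed by the server's documentation, which is the gap the Completeness score flags:

```python
def rent_increase_irl(current_rent, old_irl, new_irl):
    """Revise a French lease rent by the IRL index ratio (rounding assumed)."""
    new_rent = round(current_rent * new_irl / old_irl, 2)
    return {"new_rent_eur": new_rent,
            "increase_eur": round(new_rent - current_rent, 2)}
```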

calculate_rent_ratio (Grade D)

Compute the rent-to-income ratio. Use to assess housing affordability (rule of thumb: keep under 33%). Inputs: monthly rent, monthly gross income. Returns rent ratio % and verdict. See list_bundles for related 'immobilier' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
rent | Yes | Monthly rent
income | Yes | Monthly income

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description offers no behavioral traits, such as what the tool returns (e.g., a ratio value), any side effects, or access requirements. The agent is left guessing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At only two words, the description is under-specified rather than concise. It fails to provide a meaningful overview, making it unhelpful for an AI agent trying to understand the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description should at least state that the tool returns the rent-to-income ratio. It does not, leaving the return value ambiguous. The description is incomplete for a simple calculation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for 'rent' and 'income'. The description adds no additional meaning, but baseline is maintained as the schema already documents adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Rent-to-income ratio' is a tautology of the tool name 'calculate_rent_ratio'. It does not state the action or outcome, merely restating the ratio concept.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other financial calculators among the extensive sibling list. Context for usage is entirely absent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_retirement_date (Grade B)

Estimate retirement date from birth date and country legal retirement age. Returns: {retirement_age, retirement_date, years_remaining, already_retired, note}. See list_bundles for related 'temps-rh' calculators.
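The estimate described above can be sketched in a few lines (illustrative Python; the country ages come from the parameter table below, and the Feb-29 handling is an assumption about how the real tool behaves):

```python
from datetime import date

LEGAL_AGE = {"FR": 64, "US": 67, "UK": 66}  # ages quoted in the parameter table

def retirement_date(birth_date: str, country: str) -> dict:
    """Add the country's legal retirement age to the birth date."""
    born = date.fromisoformat(birth_date)  # expects YYYY-MM-DD
    age = LEGAL_AGE[country]
    try:
        retire = born.replace(year=born.year + age)
    except ValueError:  # Feb 29 birth date landing in a non-leap year
        retire = born.replace(year=born.year + age, day=28)
    return {
        "retirement_age": age,
        "retirement_date": retire.isoformat(),
        "already_retired": retire <= date.today(),
    }
```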

Parameters (JSON Schema)

Name        Required  Description                                     Default
country     Yes       Country: FR=64 years, US=67 years, UK=66 years
birth_date  Yes       YYYY-MM-DD — Date of birth

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as input validation, error handling, output format, or assumptions (e.g., birth date in the past). The schema adds some context (country-specific ages) but the description itself is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 10 words, immediately stating the purpose. No redundant information, and it is well front-loaded with the verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with 2 parameters and no output schema, but the description does not specify the output format or handle edge cases. More details (e.g., return type, date format) would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for both parameters. The description adds no extra meaning beyond what the schema already provides (birth_date format, country enum with ages). Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('estimate') and resource ('retirement date'), clearly stating the tool's functionality. It also distinguishes itself from other retirement-related siblings (e.g., calculate_retirement_savings_gap, calculate_retirement_pension) by focusing on date calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor any mention of prerequisites, exclusions, or edge cases. The description implies usage but lacks explicit context for decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_retirement_pension (Grade B)

Estimate French basic retirement pension (retraite de base Assurance Vieillesse). Returns: {average_salary_best25, annual_pension, max_monthly_pension, prorata_pct}. See list_bundles for related 'finance-france' calculators.
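The standard formula for the French basic pension is average salary (best 25 years) × full rate (50%) × prorata of quarters contributed; a hedged sketch follows (décote/surcote and the monthly cap mentioned in the returns are omitted, and the years-to-quarters conversion is an assumption — note the real tool's parameter is named target_years even though it holds quarters):

```python
def basic_pension(average_salary_best25: float, years_contributed: float,
                  target_quarters: int = 172) -> dict:
    """SAM x 50% x prorata of validated quarters (172 quarters = 43 years)."""
    quarters = years_contributed * 4
    prorata = min(1.0, quarters / target_quarters)
    annual = average_salary_best25 * 0.50 * prorata  # full rate, no decote/surcote
    return {"annual_pension": round(annual, 2), "prorata_pct": round(prorata * 100, 1)}
```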

Parameters (JSON Schema)

Name                   Required  Description                                                Default
target_years           No        Target quarters for full pension (default 172 = 43 years)
years_contributed      Yes       Total years of contribution
average_salary_best25  Yes       Average annual salary of best 25 years in euros

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fails to disclose important behavioral traits like whether the tool is read-only, if it requires authentication, or what the output looks like. The term 'estimate' implies a non-committal calculation, but no details on safety or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-formed sentence that immediately conveys the tool's purpose. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description does not explain what the estimate returns (e.g., annual amount, monthly), nor does it cover edge cases like zero years contributed. This is a significant gap given the lack of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides full descriptions for all parameters (100% coverage). The tool description adds no extra meaning or context beyond what the schema offers, such as clarifying the units or formula.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates the French basic retirement pension, specifying the pension system (Assurance Vieillesse). This distinguishes it from other pension tools like calculate_belgian_pension.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as other French pension calculators or general retirement tools. No when-not-to or alternative suggestions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_retirement_savings_gap (Grade C)

Project retirement savings vs need and identify shortfall. Use for retirement planning. Inputs: current age, retirement age, current savings, monthly contribution, target income. Returns projected balance and gap. See list_bundles for related 'finance-universal' calculators.
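A projection of this kind can be sketched as a monthly compounding loop (illustrative Python; the 4% withdrawal-rate target and the monthly_contribution parameter are assumptions — the real tool's parameter list in the schema below does not include a contribution field):

```python
def savings_gap(current_age: int, retirement_age: int, current_savings: float,
                monthly_contribution: float, monthly_income: float,
                savings_rate: float) -> dict:
    """Project savings to retirement age and compare with a capital target."""
    r = savings_rate / 100 / 12               # monthly return rate
    months = (retirement_age - current_age) * 12
    balance = current_savings
    for _ in range(months):
        balance = balance * (1 + r) + monthly_contribution
    # illustrative target: capital funding the income at a 4% withdrawal rate
    target = monthly_income * 12 / 0.04
    return {"projected": round(balance, 2), "target": round(target, 2),
            "gap": round(max(0.0, target - balance), 2)}
```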

Parameters (JSON Schema)

Name             Required  Description                            Default
current_age      Yes       Current age
savings_rate     Yes       Annual return rate percent
monthly_income   Yes       Desired monthly retirement income EUR
retirement_age   Yes       Target retirement age
current_savings  Yes       Current savings EUR

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations; description fails to disclose key behaviors (e.g., read-only, computation method, what 'shortfall' means). Only a vague function statement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficient sentence with no wasted words. Front-loaded purpose, but excessive brevity sacrifices completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description does not explain return value (e.g., numeric gap, percentage). Missing information on what the tool provides, making it incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so full parameter documentation exists. Description adds no extra meaning beyond the schema, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states tool projects retirement savings and identifies shortfall. It distinguishes from generic calculators, but could be more specific about output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No context about prerequisites or when it's appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_reverb_predelay (Grade B)

Calculate optimal reverb pre-delay based on room size and musical tempo. See list_bundles for related 'musique' calculators.
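The acoustic basis is the first-reflection travel time in the room at roughly 343 m/s; a hedged sketch follows (the snap resolution of one 1/64-note is an assumption — the real tool only says it snaps to a musical grid):

```python
SPEED_OF_SOUND = 343.0  # m/s at ~20 degrees C

def reverb_predelay(room_length_m: float, bpm: float = 0) -> dict:
    """Pre-delay from first-reflection travel time, optionally snapped to tempo."""
    predelay_ms = room_length_m / SPEED_OF_SOUND * 1000
    out = {"predelay_ms": round(predelay_ms, 1)}
    if bpm:
        grid = 60000 / bpm / 16  # one 1/64-note in ms (assumed snap resolution)
        out["snapped_ms"] = round(round(predelay_ms / grid) * grid, 1)
    return out
```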

Parameters (JSON Schema)

Name           Required  Description                                            Default
bpm            No        Tempo in BPM (used to snap pre-delay to musical grid)
room_length_m  Yes       Room length in meters

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description does not disclose behavioral traits like side effects, permissions, or constraints. The description is too brief to provide meaningful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single line, front-loaded with key information, no unnecessary words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 parameters and no output schema, the description is minimal but covers the essential purpose. However, it lacks usage guidelines and behavioral transparency, which limits completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds 'room size and musical tempo' which maps to the parameters but does not add new semantic information beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'optimal reverb pre-delay', and specifies inputs 'room size and musical tempo'. It is specific and distinguishes from many sibling calculate_* tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, or any conditions, exclusions, or when not to use. The description is purely functional.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ring_size (Grade B)

Find ring size in EU/US/UK/JP given finger circumference or diameter. Use for jewelry shopping. Inputs: circumference mm or diameter mm. Returns size in EU, US, UK, JP systems. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)

Name              Required  Description  Default
from_system       Yes
circumference_mm  Yes

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It states the conversion direction but does not clarify the function of the 'from_system' parameter. The parameter name suggests it indicates the source system, but the description implies circumference is always in mm, making this ambiguous. Edge cases or return format are not addressed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at one sentence, which is adequate. However, for clarity, additional context about parameter usage is needed, making it slightly under-specified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With two parameters, no output schema, and no annotations, the description should provide more detail on how the parameters interact and what the output looks like. The ambiguity around 'from_system' leaves a significant gap, making the tool less usable without further investigation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It explains 'circumference_mm' but does not elaborate on 'from_system'. The description's wording implies conversion to all four systems, but the parameter suggests a single output system. This ambiguity reduces usefulness beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts ring circumference in millimeters to four sizing systems (EU, US, UK, JP). This is a specific verb+resource combination and distinguishes it from siblings like 'calculate_ring_size_convert' which likely does a different conversion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to convert ring circumference to standard sizing systems. However, it does not explicitly state when to use this tool versus alternatives, nor does it explain the role of the 'from_system' parameter, which could cause confusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_ring_size_convert (Grade A)

Convert ring sizes between EU, US, UK, and Japan systems. Use for international jewelry shopping. Inputs: size, from-system, to-system. Returns equivalent size. See list_bundles for related 'textile-mode' calculators.

Parameters (JSON Schema)

Name         Required  Description                 Default
size         Yes       Ring size in source system
from_system  Yes       Source sizing system

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It only states the conversion action but does not disclose behavioral traits such as output format, rounding rules, or whether multiple systems can be outputs. For a simple read-like tool, more detail on return values would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 11 words, extremely concise and to the point. Every word adds value, and it is front-loaded with the core purpose. No unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool with two parameters and no output schema, the description is minimal but could be more complete. It does not explain what the tool returns (e.g., a single converted value or all system equivalents). For a straightforward conversion, it is adequate but lacks completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no extra parameter meaning beyond the schema; it does not elaborate on how size and from_system interact or expected value ranges. However, the schema descriptions are sufficient for basic understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts ring sizes and lists the specific systems (EU, US, UK, JP). The verb 'convert' and noun 'ring size' are precise, and the list of systems distinguishes it from other conversion tools among siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing ring size conversion between the listed systems, but it provides no explicit guidance on when not to use it or alternatives. No exclusion criteria or context about prerequisites are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_roi (Grade C)

Compute Return on Investment as a percentage. Use to evaluate investments, marketing spend, or projects. Inputs: investment cost, return value. Returns ROI %, profit, and multiple. See list_bundles for related 'finance-universal' calculators.
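The ROI %, profit, and multiple the description promises follow from one formula; a minimal sketch (illustrative Python; field names and rounding are assumptions):

```python
def roi(investment: float, return_value: float) -> dict:
    """ROI percent, absolute profit, and return multiple."""
    if investment <= 0:
        raise ValueError("investment must be positive")
    profit = return_value - investment
    return {
        "roi_pct": round(profit / investment * 100, 2),
        "profit": round(profit, 2),
        "multiple": round(return_value / investment, 2),
    }
```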

Parameters (JSON Schema)

Name          Required  Description                   Default
investment    Yes       Initial investment amount
return_value  Yes       Final value or total returns

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It does not disclose the ROI formula (e.g., (return_value - investment) / investment * 100), error conditions (e.g., zero investment), or any side effects. The behavior is opaque.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is extremely concise and front-loaded with the essential action. However, it may be too brief, lacking structure and additional context that could be included without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple nature of ROI calculation and complete schema, the description is minimally adequate. However, the absence of an output schema and any usage context means the agent must infer expected outputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond what the schema already provides for 'investment' and 'return_value'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'Return on Investment', indicating a specific financial computation. However, it does not distinguish from many other calculate_* sibling tools, which reduces clarity on unique purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., calculate_break_even, calculate_profit_margin). The description lacks any context about appropriate scenarios or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_roman_numeral (Grade C)

Convert between Roman numerals and decimal (1-3999). Returns: {roman_numeral, decimal}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name   Required  Description                                 Default
value  Yes       Decimal number to convert to Roman numeral

Output Schema

Parameters (JSON Schema)

Name           Required  Description
result         No        Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source         No        Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula        No        Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url  No        Link to a calcul2 page documenting the calculation in detail.
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only says 'Convert', without specifying that it is read-only, what the output format is, or any other behavioral details. The lack of information limits transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise but lacks structure. It could be improved by separating the two conversion directions or adding context, but it is not overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool and lack of output schema, the description should clarify the conversion direction, output format, and any restrictions. It is incomplete and leaves the agent to infer behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema coverage is 100% for the single parameter, the description implies support for both directions (Roman to decimal and decimal to Roman), but the schema only defines an integer input. This inconsistency undermines parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Convert between Roman numerals and decimal (1-3999)', suggesting bidirectional conversion. However, the input schema only accepts an integer (decimal number), which implies only decimal-to-Roman conversion. This ambiguity reduces clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or comparison with sibling tools like other conversion calculators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_roof_area (B)

Calculate roof surface area from building footprint and slope angle. Returns: {rafter_length_m, net_roof_area_m2, with_5pct_overhang_m2}. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
base_width_m | Yes | Building width in meters |
base_length_m | Yes | Building length in meters |
slope_degrees | Yes | Roof slope in degrees |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description must fully disclose behavior. It only states the calculation purpose but does not mention output units, assumptions (e.g., simple gable roof assumptions), or limitations (e.g., complex roof shapes). The agent cannot infer the return format or edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that conveys the core purpose without unnecessary words. It is front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks critical context: it does not specify the output (presumably area in square meters), and there is no output schema to compensate. Additionally, edge cases like flat roofs (slope=0) or invalid inputs are not addressed. For a tool with 3 required parameters and no output schema, more detail is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% description coverage for all three parameters. The description adds minimal context by referencing 'building footprint' and 'slope angle', which maps to the parameters. Since schema coverage is high, the baseline is 3, and the description does not add significant new meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Calculate roof surface area from building footprint and slope angle', clearly indicating the specific verb and resource. It distinguishes itself from siblings like 'calculate_area' (general area) and 'calculate_roof_truss' (structural calculations).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Given the large number of sibling calculate tools, the description does not help select the correct one for roof area problems.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
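The output fields named in the description suggest a simple gable-roof model. A hypothetical Python sketch of that calculation (the gable assumption and the formulas are ours; the server documents neither):

```python
import math

def roof_area(base_width_m: float, base_length_m: float,
              slope_degrees: float) -> dict:
    """Gable-roof sketch of what the tool presumably computes.
    Assumes two symmetric planes; not the server's documented method."""
    # rafter runs from eave to ridge over half the building width
    rafter = (base_width_m / 2) / math.cos(math.radians(slope_degrees))
    net = 2 * rafter * base_length_m  # two roof planes
    return {
        "rafter_length_m": round(rafter, 2),
        "net_roof_area_m2": round(net, 2),
        "with_5pct_overhang_m2": round(net * 1.05, 2),
    }
```

Stating assumptions like these in the description would close the Behavior and Completeness gaps noted above.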

calculate_roof_truss (C)

Calculate roof truss dimensions, rafter length and material quantities for a pitched roof. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
span_m | Yes | Total roof span in meters (full width) |
load_kg_m2 | No | Total roof load in kg/m² including snow, wind and tiles (default 150) |
spacing_cm | No | Distance between trusses/rafters in cm (default 60cm) |
pitch_degrees | Yes | Roof pitch angle in degrees |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description only says 'calculate', which is insufficient to disclose behavioral traits like input validation, error handling, or whether it is side-effect-free. The description does not add behavioral context beyond the purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that directly states the tool's function without unnecessary words. It is front-loaded with key actions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions expected outputs (dimensions, rafter length, material quantities), which is helpful. However, it lacks details on return format, units, or any edge cases, making it moderately complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all parameters. The description adds no additional meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it calculates roof truss dimensions, rafter length, and material quantities for a pitched roof, which is specific. However, it does not explicitly differentiate from sibling tools like calculate_roof_area or calculate_beam_load.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description is generic and does not mention specific use cases or conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_rule_of_72 (B)

Estimate years to double an investment using the Rule of 72. Returns: {doubling_years_rule72, doubling_years_precise, annual_rate_pct}. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
annual_rate | Yes | Annual return rate percent |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The word 'Estimate' hints at approximation, but no details on accuracy, rate range (e.g., 6-10%), or handling of edge cases. No annotations provided to supplement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of 10 words, front-loaded with key action and purpose. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers the basic purpose, but lacks depth on usage context and behavioral caveats.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of parameters with description. The tool description adds no further meaning beyond the schema's 'Annual return rate percent'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it estimates years to double an investment using the Rule of 72. The verb 'Estimate' and resource 'years to double' are specific, and the method is named. This distinguishes it from many sibling financial calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like exact compound interest calculators. No mention of approximation context or rate ranges.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
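The two doubling figures the description promises are easy to reproduce: 72/rate for the rule of thumb, ln(2)/ln(1+r) for the exact value. A Python sketch (the rounding to two decimals is our assumption; the description does not state it):

```python
import math

def rule_of_72(annual_rate_pct: float) -> dict:
    """Doubling-time estimate. We assume doubling_years_precise is the
    compound-interest exact value ln(2)/ln(1 + r)."""
    rule72 = 72 / annual_rate_pct
    precise = math.log(2) / math.log(1 + annual_rate_pct / 100)
    return {
        "doubling_years_rule72": round(rule72, 2),
        "doubling_years_precise": round(precise, 2),
        "annual_rate_pct": annual_rate_pct,
    }
```

At 8% the approximation gives 9.0 years against an exact 9.01, which is the kind of accuracy context the Behavior score above asks for.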

calculate_rule_of_three (C)

Solve rule of three (cross-multiplication): if a→b, then c→? Use for proportions, recipe scaling, or unit pricing. Inputs: a, b, c. Returns x. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
a | Yes | Known value A |
b | Yes | Corresponding value B |
x | Yes | New value of A |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not disclose any behavioral traits beyond the basic operation. With no annotations, the description should mention edge cases, constraints (e.g., division by zero), or result interpretation. It only describes the mathematical concept without operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one sentence). It is front-loaded with the key purpose. While it could be more detailed, it efficiently communicates the core function without extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 parameters, no output schema), the description provides minimal context. It does not explain the formula used, the relationship between parameters, or the expected output. A more complete description would clarify that it computes y = (b * x) / a.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers all three parameters with clear descriptions, achieving 100% coverage. The description adds no additional semantics beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: solving rule of three/cross multiplication. It is specific and distinguishes from other calculator siblings by its unique mathematical operation. However, it could be more explicit about what the tool computes (e.g., the fourth proportional).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other calculation tools. There are many sibling calculators (e.g., calculate_percentage, calculate_ratio_simplify), and the description lacks any context about when cross multiplication is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
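The cross-multiplication the description names reduces to one formula. Given the schema's parameter names (a, b, and x as the new value of A), a plausible Python sketch:

```python
def rule_of_three(a: float, b: float, x: float) -> float:
    """If a corresponds to b, what does x correspond to?
    Cross-multiplication: result = b * x / a (requires a != 0)."""
    if a == 0:
        raise ZeroDivisionError("a must be non-zero")
    return b * x / a
```

For example, if 2 eggs need 6 spoons of flour, 5 eggs need 15. Spelling out this formula and the a != 0 constraint would address the Completeness and Behavior gaps noted above.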

calculate_running_pace (A)

Calculate running pace (min/km) and speed (km/h) from distance and time. Returns: {pace_min_per_km, pace_formatted}. See list_bundles for related 'sport' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
distance_km | Yes | Distance in kilometers |
time_minutes | Yes | Total time in minutes |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose any behavioral traits. It implies a read-only computation, but lacks details on side effects, authorization needs, or return behavior. However, the tool is straightforward, so this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and to the point, containing no unnecessary words. It could be improved with slight structuring but serves its purpose well.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description mentions output metrics (pace and speed) but does not specify the output format or structure. For a simple calculator, this may suffice, but additional detail would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameters are well-documented. The description adds output unit context ('min/km' and 'km/h') but does not add meaning beyond the schema's parameter descriptions. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates running pace (min/km) and speed (km/h) from distance and time, specifying units and distinguishing it from sibling tools like calculate_swimming_pace or calculate_marathon_splits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusion conditions. The purpose is clear from the name, but no usage context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
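Both promised outputs follow from a single division. A Python sketch (pace_min_per_km and pace_formatted follow the description; the speed_kmh field name is our guess, since the description mentions km/h but omits it from the return list):

```python
def running_pace(distance_km: float, time_minutes: float) -> dict:
    """Pace and speed from distance and time. Sketch of the likely
    calculation; field rounding and formatting are assumptions."""
    pace = time_minutes / distance_km        # min/km
    speed = 60 * distance_km / time_minutes  # km/h
    minutes, seconds = divmod(round(pace * 60), 60)
    return {
        "pace_min_per_km": round(pace, 2),
        "pace_formatted": f"{minutes}:{seconds:02d} min/km",
        "speed_kmh": round(speed, 2),
    }
```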

calculate_salary_comparison_ppp (B)

Compare salaries across countries using PPP (FR=0.79, US=1.0, UK=0.81, DE=0.77, CH=1.36, BE=0.80). Returns: {ppp_from, ppp_to, equivalent, ratio}. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
salary | Yes | Salary in local currency |
to_country | Yes | Target country |
from_country | Yes | Source country |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It adds context by listing specific PPP factors (e.g., FR=0.79), indicating the conversion method. However, it does not explain the output format, whether the result is in source or target currency, or any side effects (e.g., if it only adjusts for PPP without currency conversion).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and includes necessary PPP factors. Every part adds value, with no redundant or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description should clarify the return value (e.g., adjusted salary, comparison ratio, or both). It does not specify what the tool returns, leaving ambiguity about the result's format and units. This is a significant gap for a 3-parameter tool with no other structural aids.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter description coverage, so the baseline is 3. The tool description adds the PPP context but does not improve meaning for individual parameters beyond what the schema already provides (e.g., 'Salary in local currency', 'Source country').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare salaries across countries using PPP'. It specifies a concrete verb ('Compare') and resource ('salaries across countries'), and introduces a unique method (PPP) with specific factor values, distinguishing it from other salary or currency tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool vs alternatives, such as calculate_currency_exchange or calculate_purchasing_power. There is no mention of prerequisites, limitations, or scenarios where other tools would be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
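One plausible reading of the tool is to normalise by the source factor and scale by the target factor. A Python sketch (the factors are those quoted in the description; the conversion direction is our assumption, since the description leaves it undocumented):

```python
# PPP factors copied from the tool description; the formula below is
# a hypothesis about the tool's behavior, not confirmed by the server.
PPP = {"FR": 0.79, "US": 1.0, "UK": 0.81, "DE": 0.77, "CH": 1.36, "BE": 0.80}

def ppp_equivalent(salary: float, from_country: str, to_country: str) -> dict:
    ratio = PPP[to_country] / PPP[from_country]
    return {
        "ppp_from": PPP[from_country],
        "ppp_to": PPP[to_country],
        "equivalent": round(salary * ratio, 2),
        "ratio": round(ratio, 4),
    }
```

Documenting whether 'equivalent' is expressed in source or target currency would resolve the main ambiguity flagged above.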

calculate_salary_hourly_to_annual (A)

Convert hourly wage to monthly and annual salary, gross or net. Use for job comparisons. Inputs: hourly rate, hours/week, weeks/year. Returns weekly, monthly, and annual figures. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
hourly_rate | Yes | Hourly rate |
hours_per_week | No | Hours worked per week |
weeks_per_year | No | Weeks worked per year |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the conversion behavior but does not mention that it uses default values for hours_per_week (35) and weeks_per_year (52), nor does it describe any side effects (none expected). The description is adequate but lacks depth; it could state that the tool performs a simple calculation with no persistent state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence front-loading the core purpose. Every word is informative with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and full schema coverage, the description is mostly complete for a conversion tool. However, it does not specify the output format (e.g., an object with annual, monthly, daily values) despite lacking an output schema. This is a minor gap for a tool that generates multiple results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so each parameter has a description (e.g., 'Hourly rate', 'Hours worked per week'). The description adds minimal value beyond the schema—it implies 'hourly_rate' is the primary input but does not explain the relationship between parameters (e.g., 'annual = hourly * hours * weeks'). Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Convert' and clearly identifies the resource transformation: hourly rate to annual, monthly, and daily salary. It distinguishes this tool from other salary-related siblings like 'calculate_belgian_salary' by focusing on conversion rather than full calculation with taxes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., 'calculate_belgian_salary' or 'calculate_salary_comparison_ppp'). There are no exclusions or context for typical usage scenarios, leaving the agent to infer from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
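The conversion itself is a two-step multiplication. A Python sketch (the 35 h/week and 52-week defaults come from the Behavior note above and are not confirmed by the schema):

```python
def hourly_to_annual(hourly_rate: float, hours_per_week: float = 35,
                     weeks_per_year: float = 52) -> dict:
    """annual = hourly * hours/week * weeks/year. Gross figures only;
    field names and defaults are assumptions, not server-documented."""
    weekly = hourly_rate * hours_per_week
    annual = weekly * weeks_per_year
    return {
        "weekly": round(weekly, 2),
        "monthly": round(annual / 12, 2),
        "annual": round(annual, 2),
    }
```

Making the parameter relationship explicit in the description, as this formula does, is exactly what the Parameters note above asks for.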

calculate_sample_size (C)

Compute required sample size for a survey to hit a target margin of error. Use for survey design and A/B testing. Inputs: population, confidence %, margin of error %. Returns minimum sample size. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
confidence | No | Confidence level | 95
population | No | Population size |
margin_error_pct | Yes | Margin of error % |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not disclose any behavioral traits (e.g., write vs read, side effects, rate limits). The description is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (5 words) but is a noun phrase without a verb. It lacks structure and could be improved with a clear subject-verb-object form.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 parameters, no output schema, and no annotations, the description is incomplete. It does not explain the output or any important context like default behavior or interpretation of results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no extra meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Required sample size for a survey' indicates the tool calculates sample size but lacks a verb (e.g., 'Calculate') and does not distinguish it from other calculate tools. It is functional but vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. The sibling list is extensive, but the description offers no context on when this tool is appropriate or when to choose another.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_savings_goal (B)

Compute time and monthly contribution needed to reach a savings target. Use for goal-based personal finance. Inputs: target amount, current savings, monthly contribution, annual return %. Returns months to goal. See list_bundles for related 'finance-universal' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
annual_rate | Yes | Annual return rate percent |
target_amount | Yes | Savings target EUR |
monthly_savings | Yes | Monthly savings EUR |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should cover behavioral traits. It fails to disclose assumptions (e.g., monthly compounding, constant rate, no fees) or the output format (months/years). This is a significant gap for a financial calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence, no fluff. It gets straight to the point, though it might be too brief at the expense of completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should provide more context about return values and assumptions. Without that, the tool specification is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions (target amount, monthly savings, annual rate). The description adds no extra meaning beyond the schema, which is adequate but not enhanced.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the verb 'Calculate' and the resource 'time needed to reach a savings target'. It distinguishes itself from sibling financial calculators like calculate_future_value or calculate_compound_interest, which compute different quantities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., for future value calculations). The description does not mention any when-not or contextual cues.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_scholarship_comparison (Grade B)

Compare net tuition cost across multiple scholarships and aid packages. Use for college choice. Inputs: list of {tuition, aid} pairs. Returns ranked net costs and best option. See list_bundles for related 'education' calculators.
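
The ranking the description implies can be sketched as below. The variadic signature is a simplification of the scholarship_1..scholarship_3 parameters, and net cost is assumed to be tuition minus aid, floored at zero; the server's actual logic may differ.

```python
def rank_scholarships(tuition, *scholarships):
    """Rank aid packages by the net tuition they leave to pay (best first)."""
    nets = sorted((max(tuition - s, 0), i + 1) for i, s in enumerate(scholarships))
    return [{"scholarship": i, "net_cost": net} for net, i in nets]
```

The first entry of the returned list is the best option.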

Parameters
- tuition (required): Annual tuition EUR
- scholarship_1 (optional): Scholarship 1 amount EUR
- scholarship_2 (optional): Scholarship 2 amount EUR
- scholarship_3 (optional): Scholarship 3 amount EUR

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description merely hints at comparing costs but does not explain the calculation logic, edge cases, or what 'net tuition' means. It lacks depth beyond the minimal purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence, concise and front-loaded, but lacking structural elements such as bullet points or sections. Adequate for a simple tool but not exemplary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple calculation, description could still specify what the output is (e.g., net cost) or handle defaults. The minimal text leaves ambiguity about the exact comparison result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers all parameters with descriptions, yielding 100% coverage. The description adds context that the parameters involve costs and scholarships, but does not elaborate beyond 'net tuition costs'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool compares net tuition costs after scholarships, with a specific verb and resource. It distinguishes itself from sibling calculate_ tools by focusing on scholarship comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like other financial calculators. No exclusions or prerequisites provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_sci_is_vs_ir (Grade B)

Compare SCI taxation under IS (corporate tax) vs IR (income tax). Use for French real-estate investors choosing tax regime. Inputs: rental income, charges, owner tax bracket. Returns net result under each regime. See list_bundles for related 'immobilier' calculators.
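
A heavily simplified sketch of such a comparison follows. All rates here are illustrative defaults, not values the server documents: 17.2% social levies, 2%/yr building amortization, and 15%/25% corporate brackets with a 42,500 EUR reduced-rate cap. Real SCI taxation involves many more rules (ceilings, deficits, exit taxation on resale).

```python
def sci_ir_vs_is(annual_rent, annual_charges, property_value, marginal_tax_rate_pct,
                 social_levies_pct=17.2, amort_rate_pct=2.0,
                 is_reduced_cap=42_500, is_reduced_pct=15.0, is_standard_pct=25.0):
    """Compare the annual tax burden of an SCI under IR vs IS (simplified)."""
    # IR: rental profit flows to the shareholder, taxed at their marginal
    # income-tax rate plus social levies.
    ir_base = max(annual_rent - annual_charges, 0)
    ir_tax = ir_base * (marginal_tax_rate_pct + social_levies_pct) / 100
    # IS: the building can additionally be amortized, then corporate rates apply.
    is_base = max(annual_rent - annual_charges - property_value * amort_rate_pct / 100, 0)
    reduced = min(is_base, is_reduced_cap)
    is_tax = reduced * is_reduced_pct / 100 + (is_base - reduced) * is_standard_pct / 100
    return {"ir_tax": round(ir_tax), "is_tax": round(is_tax)}
```

Under these assumptions, amortization often makes IS look cheaper year to year for high-bracket owners.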

Parameters
- annual_rent (required): Annual gross rental income in EUR
- annual_charges (required): Annual deductible charges in EUR (management fees, interest, maintenance)
- property_value (required): Property value for amortization calculation under IS
- marginal_tax_rate_pct (required): Shareholder marginal income tax rate in percent (e.g. 30, 41, 45)

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosing behavior. It implies a read-only calculation (comparison) but does not explicitly state side effects, authorization needs, or output format. The description is minimally adequate but lacks detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that immediately conveys the tool's purpose. It is concise with no unnecessary words, and the key action ('compare') and subject ('SCI taxation') are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (4 parameters, no output schema, no annotations), the description is too minimal. It fails to explain domain-specific acronyms (SCI, IR, IS), the nature of the comparison, or what output format the agent should expect. This leaves the agent guessing about the tool's input requirements and result interpretation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add any additional meaning beyond the parameter descriptions already present in the schema. It remains neutral, not adding nor detracting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: comparing SCI taxation under IR and IS regimes to find the most advantageous option. The verb 'compare' and the resource 'SCI taxation under IR vs IS' are specific and unambiguous. Among the sibling tools, none explicitly cover this exact scenario, so it distinguishes itself effectively.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives. It neither states prerequisites nor indicates when not to use it. Given the many sibling tax calculators, an agent would benefit from explicit pointers to differentiate this tool from others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_seed_quantity (Grade B)

Calculate the number of seeds needed based on surface area, spacing and germination rate. See list_bundles for related 'jardinage' calculators.
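
The calculation is presumably one plant per row-spacing by plant-spacing grid cell, with extra seeds sown to compensate for the germination rate. A sketch under that assumption (the server's exact rounding behavior is not documented):

```python
import math

def seed_quantity(surface_m2, row_spacing_cm, plant_spacing_cm,
                  germination_rate_pct=85):
    """Seeds to sow: one plant per grid cell, inflated by germination rate."""
    # 1 m^2 = 10,000 cm^2; each plant occupies row_spacing x plant_spacing cm^2
    plants = surface_m2 * 10_000 / (row_spacing_cm * plant_spacing_cm)
    return math.ceil(plants / (germination_rate_pct / 100))
```

For 10 m^2 with 25 cm rows and 10 cm plant spacing, 400 plants fit; at 80% germination you would sow 500 seeds.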

Parameters
- surface_m2 (required): Surface area in square meters
- row_spacing_cm (required): Distance between rows in centimeters
- plant_spacing_cm (required): Distance between plants in a row in centimeters
- germination_rate_pct (optional): Germination rate in percent (default 85%)

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as side effects, authorization needs, or limitations. It only states the basic function, leaving the agent uninformed about the tool's operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that conveys the essential information without any wasted words. It is optimally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculator with 4 parameters and no output schema, the description covers the core function but omits details such as the output format, underlying formula, or assumptions (e.g., rectangular planting grid). Given the simplicity, it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already explains each parameter. The description adds no new meaning beyond what is already in the schema, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates the number of seeds needed based on surface area, spacing, and germination rate. It uses a specific verb and resource, but does not differentiate from sibling tools like 'calculate_lawn_seed' which may serve a similar purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for calculating seed quantity for planting, but it provides no guidance on when to use this tool versus alternatives like 'calculate_lawn_seed' or 'calculate_garden_soil'. No exclusions or context are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_senegalese_css (Grade B)

Calculate Senegalese social contributions (CSS/IPRES) for employee and employer. Returns: {gross_monthly_xof, employee, employer}. See list_bundles for related 'finance-afrique-quebec' calculators.
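
A sketch of the employee/employer split such a tool implies follows. The contribution rates below (5.6%/8.4% pension, 7% family allowance) are illustrative placeholders rather than values the server documents, and salary ceilings (plafonds) that apply in practice are ignored.

```python
def senegal_css(gross_monthly_xof, accident_rate_pct=3.0,
                employee_pension_pct=5.6, employer_pension_pct=8.4,
                family_allowance_pct=7.0):
    """Split monthly social contributions between employee and employer."""
    employee = gross_monthly_xof * employee_pension_pct / 100
    employer = gross_monthly_xof * (employer_pension_pct + family_allowance_pct
                                    + accident_rate_pct) / 100
    return {"gross_monthly_xof": gross_monthly_xof,
            "employee": round(employee), "employer": round(employer)}
```

The return shape mirrors the {gross_monthly_xof, employee, employer} object named in the description.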

Parameters
- accident_rate_pct (optional): Work accident insurance rate 1-5% (employer only, default 3%)
- gross_monthly_xof (required): Gross monthly salary in XOF

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose key behaviors but does not. It omits what the output looks like, whether it returns separate employee/employer amounts, any assumptions about rates, or side effects. The one-sentence description is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, clear sentence that is front-loaded and contains no unnecessary words. Efficient and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, yet the description does not explain the return format (e.g., does it return separate employee and employer contributions? total? breakdown?). Without this, the agent cannot predict the output. The description is incomplete for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond what the schema already provides for gross_monthly_xof and accident_rate_pct.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Senegalese social contributions (CSS/IPRES) for both employee and employer, distinguishing it from sibling tools like calculate_senegalese_income_tax or calculate_senegalese_vat.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. For example, it does not clarify if users should use this for both CSS and IPRES together or if separate tools exist. Implied usage only.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_senegalese_income_tax (Grade B)

Calculate Senegalese income tax (IRPP) using DGI progressive brackets in XOF. Returns: {annual_income_xof, income_tax_xof, effective_rate_pct, marginal_rate_pct, brackets}. See list_bundles for related 'finance-afrique-quebec' calculators.
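
Progressive-bracket taxation of this kind is typically computed slice by slice. The helper below shows the generic technique; the DEMO_BRACKETS values are illustrative placeholders, not the actual DGI schedule.

```python
def progressive_tax(income, brackets):
    """Tax an income slice by slice. brackets: ordered (upper_bound, rate_pct)
    pairs, with float('inf') as the final bound."""
    tax, lower = 0.0, 0
    for upper, rate_pct in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate_pct / 100
        lower = upper
    return tax

# Illustrative placeholder brackets in XOF, NOT the actual DGI schedule.
DEMO_BRACKETS = [(630_000, 0), (1_500_000, 20), (4_000_000, 30), (float("inf"), 40)]
```

With these placeholder brackets, 2,000,000 XOF yields 174,000 on the second slice plus 150,000 on the third, i.e. 324,000 XOF of tax.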

Parameters
- annual_income_xof (required): Annual gross income in CFA Francs (XOF)

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits. It only states the calculation function but does not confirm safe, read-only, or non-destructive behavior. No details on side effects, permissions, or data handling are given, leaving the agent insufficiently informed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that quickly conveys the essential information. There are no redundant words, and it is structured to front-load the topic 'Calculate Senegalese income tax (IRPP)' for immediate understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple with one parameter and no output schema. The description provides the core function and method but does not mention the tax year, possible deductions, or the format of the result (e.g., tax amount, rate). While minimal, it covers the basics adequately for a simple calculator.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides a description for the single parameter ('Annual gross income in CFA Francs (XOF)') with a minimum constraint. The tool description adds context about 'using DGI progressive brackets' but does not explain the parameter beyond the schema. With 100% schema coverage, the baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculate', the specific resource 'Senegalese income tax (IRPP)', the method 'using DGI progressive brackets', and the currency 'XOF'. It uniquely identifies the tool among many sibling country-specific tax calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives. There is no mention of prerequisites, limitations, or when not to use it. The context of sibling tools implies usage for Senegal, but the description itself lacks such guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_senegalese_vat (Grade C)

Compute Senegalese VAT (TVA) — 18% standard rate. Use for invoicing in Senegal. Inputs: amount, mode (ht/ttc). Returns HT, TTC, tax amount. See list_bundles for related 'finance-afrique-quebec' calculators.
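
The ht/ttc modes imply the standard conversion between tax-exclusive (hors taxe) and tax-inclusive (toutes taxes comprises) amounts. A sketch, with two-decimal rounding assumed:

```python
def senegal_vat(amount, mode="ht", rate=18.0):
    """Split an amount into HT (excl. tax), TVA, and TTC (incl. tax)."""
    if mode == "ht":
        ht, ttc = amount, amount * (1 + rate / 100)
    else:  # mode == "ttc": work backwards from the tax-inclusive amount
        ttc, ht = amount, amount / (1 + rate / 100)
    return {"ht": round(ht, 2), "tva": round(ttc - ht, 2), "ttc": round(ttc, 2)}
```

100,000 XOF hors taxe at the standard 18% rate gives 18,000 XOF of TVA and a 118,000 XOF invoice total.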

Parameters
- mode (optional): Input mode: ht = hors taxe, ttc = toutes taxes comprises (default: ht)
- rate (optional): TVA rate in % (standard 18%)
- amount (required): Amount in XOF

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It only states the calculation type and rate, but omits key details such as output format (returns VAT amounts, HT/TTC?), rounding behavior, or that it requires XOF currency. The schema indicates mode and amount requirements, but the description adds no behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, very concise. It conveys the core purpose without extra words. However, it could be better structured to include more details without becoming verbose, but for a simple tool it is appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain what the tool returns (e.g., an object with VAT amount, HT, TTC). It does not. Additionally, it lacks context about the currency (XOF) and the meaning of mode. For a tool with three parameters and no output schema, this is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for all three parameters (mode, rate, amount), so the description's mention of 'standard 18% or specified rate' adds no new meaning beyond what the schema already provides. The description does not explain the mode parameter (ht/ttc) beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Senegalese VAT (TVA) and mentions the standard rate, distinguishing it from generic or other country-specific VAT calculators. However, it could be more explicit about the currency (XOF) and that it is for Senegal only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like calculate_vat_generic or other country-specific VAT calculators. There are no explicit use cases or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_sequence (Grade B)

Calculate nth term and sum of arithmetic or geometric sequence. Returns: {common_difference, nth_term, sum_of_n}. See list_bundles for related 'math' calculators.
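
The underlying formulas are the textbook ones: a_n = a_1 + (n-1)d and S_n = n(a_1 + a_n)/2 for arithmetic sequences, a_n = a_1·r^(n-1) and S_n = a_1(r^n - 1)/(r - 1) for geometric. A sketch (parameter names mirror the schema; the common_difference output field is omitted):

```python
def sequence(first_term, common, n, seq_type="arithmetic"):
    """nth term and sum of the first n terms of a sequence."""
    if seq_type == "arithmetic":
        nth = first_term + (n - 1) * common          # a_n = a1 + (n-1)d
        total = n * (first_term + nth) / 2           # S_n = n(a1 + a_n)/2
    else:  # geometric
        nth = first_term * common ** (n - 1)         # a_n = a1 * r^(n-1)
        total = (first_term * n if common == 1
                 else first_term * (common ** n - 1) / (common - 1))
    return {"nth_term": nth, "sum_of_n": total}
```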

Parameters
- n (required): Number of terms
- type (required): Sequence type
- common (required): Common difference (arithmetic) or ratio (geometric)
- first_term (required): First term (a1)

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description only states the calculation purpose, with no information about side effects, authorization needs, limitations, or behavior beyond the obvious.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently communicates the core functionality without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While adequate for a simple calculator, the description lacks details on what exactly is returned (e.g., both nth term and sum? only one?) and could be more informative for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear parameter descriptions. The description adds context about what is calculated (nth term and sum), which slightly enhances understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'calculate' and the resources 'nth term and sum' for arithmetic or geometric sequences. It is clear but does not differentiate itself from other similar calculation tools on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not mention any prerequisites or context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_severance_pay (Grade B)

Calculate French severance pay for rupture conventionnelle or licenciement. Returns: {monthly_salary, years_first_10, years_above_10, gross_indemnite, note}. See list_bundles for related 'finance-france' calculators.
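
The years_first_10 and years_above_10 output fields suggest the commonly cited French legal minimum: a quarter of a month's salary per year of seniority for the first 10 years, a third of a month per year beyond. A sketch under that assumption (collective agreements can set higher amounts, which this ignores):

```python
def severance_pay(monthly_salary, years_seniority):
    """Gross legal indemnity: 1/4 month per year up to 10 years, 1/3 beyond."""
    first_10 = min(years_seniority, 10)
    above_10 = max(years_seniority - 10, 0)
    return monthly_salary * (first_10 / 4 + above_10 / 3)
```

For a 3,000 EUR reference salary, 8 years of seniority gives 6,000 EUR and 12 years gives about 9,500 EUR.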

Parameters
- monthly_salary (required): Reference gross monthly salary in euros
- years_seniority (required): Years of seniority in the company

Output Schema
- result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
- source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
- formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
- reference_url (optional): Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It only states 'Calculate', which suggests a read-only computation, but does not explicitly disclose whether it modifies data, requires authentication, or what the output format is. The transparency is limited.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that effectively communicates the core purpose. It is front-loaded with the verb and object, but could include more detail without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks information about return values or additional context such as legal assumptions or required inputs beyond salary and seniority. Given the absence of an output schema, more detail on output would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with descriptions. The tool description does not add additional meaning beyond what the schema provides. With high schema coverage, the baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates French severance pay for two specific legal scenarios, rupture conventionnelle or licenciement. This provides a specific verb and resource, and distinguishes it from the many other calculate tools available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for French severance calculations but offers no explicit guidance on when to use this tool versus alternatives, nor does it mention exclusions or prerequisites. Usage is implied by context but not clarified.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_shipping_volumetric (D)

Compute volumetric (dimensional) weight for shipping. Carriers bill the higher of actual and dim weight. Inputs: L×W×H cm, divisor (5000 air, 6000 ground). Returns dim weight kg. See list_bundles for related 'finance-universal' calculators.
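The formula implied by the description can be sketched in a few lines; the function name and the lack of rounding are illustrative assumptions, not part of the server's API.

```python
def billable_weight_kg(length_cm: float, width_cm: float, height_cm: float,
                       actual_kg: float, divisor: int = 5000) -> float:
    """Carriers bill the greater of actual weight and dimensional weight.

    divisor: 5000 for air freight, 6000 for ground, per the description.
    """
    dim_weight = (length_cm * width_cm * height_cm) / divisor
    return max(actual_kg, dim_weight)

# A 40x30x20 cm parcel weighing 2 kg by air: dim weight = 24000 / 5000 = 4.8 kg,
# which exceeds the actual 2 kg, so 4.8 kg is billed.
print(billable_weight_kg(40, 30, 20, 2.0))
```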

Parameters (JSON Schema)

Name | Required | Description | Default
width_cm | Yes | Width cm | -
actual_kg | Yes | Actual weight kg | -
height_cm | Yes | Height cm | -
length_cm | Yes | Length cm | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosing behavioral traits. It fails to mention any side effects, return format, or assumptions (e.g., dimensional weight divisor). The tool could be performing a simple calculation, but the description gives no behavioral insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short (3 words) but lacks necessary structure. It does not front-load a verb or purpose, and the brevity comes at the cost of clarity and completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description should explain what the tool returns (e.g., 'Returns the volumetric weight in kg'). It provides no such information, making it incomplete for an agent to anticipate the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with clear descriptions (e.g., 'Width cm'). While the description adds no additional meaning beyond the schema, the baseline is 3 given high schema coverage. However, it does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Volumetric weight for shipping' is a noun phrase that essentially restates the tool name. It does not explicitly state the action (e.g., 'Calculate the volumetric weight'), leaving it little more than a restatement of the name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the many sibling calculate_* tools. There is no mention of prerequisites or exclusions, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_shoe_size (C)

Convert shoe size between EU, US, UK, and Japanese systems. Use for international shopping. Inputs: size, from-system, to-system, gender. Returns equivalent. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
size | Yes | Shoe size | -
from_system | Yes | From system | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must shoulder the full burden. It fails to disclose key behaviors: whether it converts to all systems or requires a target, the output format, or any limitations. The minimal description leaves the agent guessing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is concise, but at the expense of essential details. While brevity is valued, it omits information critical for correct tool invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and the large number of sibling conversion tools, the description is insufficient. It should specify the conversion direction (e.g., from one system to all others) and the output format to be usable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents the two parameters. The description adds no additional meaning beyond what is in the schema, earning a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'convert' and the resource 'shoe sizes', indicating the tool's core purpose. However, it does not differentiate itself from sibling tools like 'calculate_shoe_size_convert' and 'convert_shoe_size', which serve a similar function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives or what conversion direction it performs (e.g., from a single system to all others or to a specific target). The description lacks any context for appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_shoe_size_convert (B)

Convert shoe size between EU, US (M/W) and UK systems. Returns: {converted_size, eu_size, original_size}. See list_bundles for related 'conversions' calculators.
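For illustration, the conversion can be sketched as a table lookup. The rows below are approximate equivalences drawn from common sizing charts (real charts vary by brand), and the function name, the table values, and the error handling are all assumptions; only the return shape {converted_size, eu_size, original_size} comes from the description.

```python
# Illustrative subset of an EU / US men's / US women's / UK equivalence chart.
SIZE_TABLE = [
    # (eu, us_m, us_w, uk)
    (40, 7.0, 8.5, 6.0),
    (41, 8.0, 9.5, 7.0),
    (42, 9.0, 10.5, 8.0),
    (43, 10.0, 11.5, 9.0),
]

def convert_shoe_size(size: float, from_system: str, to_system: str) -> dict:
    """Look up `size` in the source system's column and read off the target column."""
    cols = {"EU": 0, "US_M": 1, "US_W": 2, "UK": 3}
    for row in SIZE_TABLE:
        if row[cols[from_system]] == size:
            return {
                "converted_size": row[cols[to_system]],
                "eu_size": row[0],
                "original_size": size,
            }
    raise ValueError(f"size {size} not found in table for system {from_system}")

print(convert_shoe_size(42, "EU", "UK"))
```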

Parameters (JSON Schema)

Name | Required | Description | Default
size | Yes | Shoe size in source system | -
to_system | Yes | Target sizing system | -
from_system | Yes | Source sizing system | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden for behavioral disclosure. It only states the conversion action without revealing any constraints, edge cases, or side effects. For example, it doesn't mention that size input must be positive (though schema enforces it). Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, well-structured sentence that conveys the core functionality without any unnecessary words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (3 plain parameters, no output schema), the description is mostly sufficient. It could note that US men's and women's sizes are separate systems (US_M vs US_W) and how half sizes are handled, but the enum values already make the former clear. Adequate for a straightforward conversion tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already covers all parameters with clear descriptions (e.g., 'Shoe size in source system') and enum values for systems. The description adds no extra semantic value beyond what the schema provides. Schema coverage is 100%, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and the resource 'shoe size', and specifies the systems (EU, US M/W, UK). It distinguishes the tool from siblings like 'calculate_shoe_size' and 'convert_shoe_size' by naming the exact systems involved.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidelines are provided about when to use this tool versus alternatives (e.g., convert_shoe_size or calculate_shoe_size). There is no mention of prerequisites, limitations, or context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_simple_interest (B)

Compute simple interest I=P·r·t. Use for short-term loans, basic savings accounts, and homework. Returns interest amount and final balance. See list_bundles for related 'finance-universal' calculators.
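A minimal sketch of the stated formula, assuming the rate is passed as a percentage (as the schema's 'Annual interest rate in %' suggests); the function name and dict field names are illustrative.

```python
def simple_interest(principal: float, annual_rate_pct: float, years: float) -> dict:
    """I = P * r * t, with the rate given as a percentage per the schema."""
    interest = principal * (annual_rate_pct / 100) * years
    return {"interest": interest, "final_balance": principal + interest}

# 1000 at 5% for 3 years: 1000 * 0.05 * 3 = 150 interest, 1150 final balance.
print(simple_interest(1000, 5, 3))
```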

Parameters (JSON Schema)

Name | Required | Description | Default
years | Yes | Duration in years | -
principal | Yes | Initial amount | -
annual_rate | Yes | Annual interest rate in % | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully convey behavioral traits. However, it only states the formula and does not disclose any side effects, permissions, or output format. It implies a safe calculation, but not explicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the core purpose and formula. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description does not mention what the tool returns (likely the interest amount) or any formatting details. Without an output schema, the agent lacks information on how to interpret the result. More context is needed for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for each parameter. The description adds the formula but does not clarify whether the rate is in decimal or percentage form, potentially causing confusion. However, the schema explicitly states 'in %', so the description adds minimal value beyond that.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates simple interest and provides the formula I = P*r*t. It is specific and distinguishes itself from sibling tools like calculate_compound_interest.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Related tools such as compound interest calculators exist, but no differentiation or usage context is offered.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_sleep_cycles (C)

Estimate optimal bedtime or wake time based on 90-min sleep cycles. Use for sleep optimization. Inputs: target wake or bedtime. Returns 4-6 cycle recommendations. See list_bundles for related 'sante' calculators.
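The 90-minute-cycle logic can be sketched by counting backward from the wake time. The 15-minute fall-asleep buffer and the function name are assumptions; the tool's actual offsets are undocumented.

```python
from datetime import datetime, timedelta

def suggest_bedtimes(wake_time: str, fall_asleep_min: int = 15) -> list:
    """Work backward from wake time in 90-minute cycles (6, 5, then 4 cycles).

    The 15-minute fall-asleep buffer is an assumption, not documented by the tool.
    """
    wake = datetime.strptime(wake_time, "%H:%M")
    bedtimes = []
    for cycles in (6, 5, 4):
        t = wake - timedelta(minutes=90 * cycles + fall_asleep_min)
        bedtimes.append(t.strftime("%H:%M"))
    return bedtimes

print(suggest_bedtimes("07:00"))
```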

Parameters (JSON Schema)

Name | Required | Description | Default
bedtime | Yes | Bedtime HH:MM | -
wake_time | Yes | Wake time HH:MM | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It only states 'analyze sleep quality' without disclosing behavioral traits like calculation method, return values, or edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise but overly brief. It lacks structure and could benefit from additional context without being verbose. Efficiency is acceptable but not optimal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description is incomplete. It does not explain what 'analyze sleep quality' entails or what the output will be (e.g., cycles, duration, phases). The agent lacks information to interpret the result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add any additional meaning beyond the schema's parameter descriptions ('Bedtime HH:MM', 'Wake time HH:MM'). No extra constraints or format details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes sleep quality from bedtime and wake time, using a specific verb and resource. It distinguishes itself from sibling tools by focusing on sleep, but could be more precise about the output (e.g., sleep cycles).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not mention context, prerequisites, or exclusions, leaving the agent to infer usage from the name and schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_slope (C)

Compute slope as percentage, angle in degrees, and ratio. Use for ramps, roofs, or terrain analysis. Inputs: rise, run. Returns slope% and angle°. See list_bundles for related 'construction' calculators.
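The three outputs named in the description follow from elementary trigonometry; the field names in the returned dict are illustrative, not the server's actual output shape.

```python
import math

def slope(rise_m: float, run_m: float) -> dict:
    """Slope as a percentage, an angle in degrees, and a 1:n ratio."""
    percent = (rise_m / run_m) * 100
    angle_deg = math.degrees(math.atan2(rise_m, run_m))
    return {
        "slope_pct": percent,
        "angle_deg": angle_deg,
        "ratio": f"1:{run_m / rise_m:g}",
    }

# 1 m rise over 10 m run: a 10% grade, roughly 5.71 degrees, ratio 1:10.
print(slope(1, 10))
```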

Parameters (JSON Schema)

Name | Required | Description | Default
run_m | Yes | Horizontal run m | -
rise_m | Yes | Vertical rise m | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must carry the behavioral burden. It fails to explain output structure (e.g., whether results are returned as separate values or combined), unit assumptions, or edge case handling (e.g., negative rise). Minimal behavioral disclosure beyond the basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence that directly conveys the tool's main function. It is front-loaded and avoids extraneous information, though it could be slightly more informative without harming conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple but lacks any output schema. The description should clarify how results are returned (e.g., as separate fields or a combined string) and mention unit consistency. Without this, the agent may misinterpret the response format. Many sibling tools exist, but no contextual differentiation is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with both parameters having clear descriptions ('Horizontal run m' and 'Vertical rise m'). The description adds no additional semantic value beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates slope and specifies three output formats (%, degrees, ratio). The verb 'calculate' and resource 'slope' are explicit. However, it does not differentiate from sibling tool 'calculate_drain_slope' which is closely related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, typical contexts, or exclusions. With many related calculate_ tools, this omission hampers agent decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_smoking_savings (C)

Compute money and health time saved by quitting smoking. Use for motivation and budgeting. Inputs: cigarettes/day, pack price. Returns daily/monthly/yearly savings and life-time recovered. See list_bundles for related 'sante' calculators.
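The savings arithmetic can be sketched as follows. The 20-cigarette pack default and the 30-day month are assumptions, and the 'life-time recovered' output mentioned in the description is omitted because its formula is not documented.

```python
def smoking_savings(cigarettes_per_day: float, pack_price: float,
                    cigarettes_per_pack: int = 20) -> dict:
    """Money saved per day/month/year by not buying cigarettes."""
    daily = (cigarettes_per_day / cigarettes_per_pack) * pack_price
    return {
        "daily": round(daily, 2),
        "monthly": round(daily * 30, 2),   # 30-day month: an assumption
        "yearly": round(daily * 365, 2),
    }

# Half a pack a day at 11.00 per pack: 5.50/day, 165.00/month, 2007.50/year.
print(smoking_savings(10, 11.0))
```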

Parameters (JSON Schema)

Name | Required | Description | Default
pack_price | Yes | Price per pack | -
cigarettes_per_day | Yes | Cigarettes smoked per day | -
cigarettes_per_pack | No | Cigarettes per pack | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It only states the calculation purpose but does not disclose what the output represents (e.g., daily, monthly, yearly savings), whether it assumes no inflation, or any other behavioral traits like whether it returns a single number or an object. This lack of transparency could lead to confusion.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no unnecessary words. It is efficient but could be slightly improved by adding a time frame or output hint without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description should clarify what the tool returns, and it does not. For a simple calculation, the return format is critical, and no behavioral notes are provided. Even given the tool's simplicity, the description leaves an agent unable to anticipate the exact output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers all parameters with descriptions, achieving 100% coverage, while the tool description adds no extra meaning beyond the schema. Since schema description coverage is high, the baseline of 3 applies; no points are added or deducted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: calculating money saved by quitting smoking, with a specific verb and resource. It lacks differentiation from sibling savings calculators (e.g., calculate_led_savings, calculate_car_lease_vs_buy), but the verb 'calculate' and the resource 'money saved by quitting smoking' are unambiguous enough for a basic understanding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not specify when to use this tool over others, nor does it mention prerequisites or limitations. For example, it doesn't state that it assumes constant pack price or that it calculates daily savings. Without this, an AI agent may misuse it or choose an inappropriate sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_soil_ph_amendment (C)

Compute lime or sulfur amount to shift soil pH to a target. Use for gardening. Inputs: current pH, target pH, area m². Returns amendment type and kg/m². See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
area_m2 | Yes | Garden area m² | -
target_ph | Yes | Target pH | -
current_ph | Yes | Current soil pH | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral transparency, but it only restates the name. It does not disclose whether the tool assumes a soil buffer capacity, returns units (kg, lbs), or has any limitations, leaving the agent uninformed about critical behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Although the description is a single sentence, it is too brief and lacks front-loading of essential details. It does not efficiently convey the tool's operation or output, making it underspecified rather than effectively concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters, no output schema, and no annotations, the description is incomplete. It fails to explain what the tool returns, under what assumptions it operates, or how the calculation is performed, leaving significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for parameter descriptions, so the schema already documents each parameter's meaning. The description adds no additional value beyond the schema, thus scoring at the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Soil pH amendment calculator' indicates a tool for calculating soil pH amendment amounts, which is clear but lacks specificity. It does not state what output is produced (e.g., lime or sulfur quantity), leaving some ambiguity about the exact purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like calculate_ph or calculate_compost_volume. There are no prerequisites, exclusions, or alternatives mentioned, making it hard for an agent to decide when to invoke this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_solar_panel_output (B)

Estimate solar panel daily and yearly energy output by location and system size. Use for solar installation sizing. Inputs: kW capacity, latitude, panel orientation, shading %. Returns kWh/day and kWh/year. See list_bundles for related 'energie' calculators.
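The output can be estimated with the standard peak-sun-hours formula, using the defaults stated in the parameter schema (4 sun hours, 15% losses); the function name and field names are illustrative.

```python
def solar_output(panel_watt_peak: float, hours_sun_per_day: float = 4.0,
                 efficiency_loss_pct: float = 15.0) -> dict:
    """kWh/day = kWp * peak sun hours * (1 - losses); defaults mirror the schema."""
    kwh_day = (panel_watt_peak / 1000) * hours_sun_per_day * (1 - efficiency_loss_pct / 100)
    return {
        "kwh_per_day": round(kwh_day, 2),
        "kwh_per_year": round(kwh_day * 365, 1),
    }

# A 3 kWp system with the defaults: 3 * 4 * 0.85 = 10.2 kWh/day.
print(solar_output(3000))
```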

Parameters (JSON Schema)

Name | Required | Description | Default
area_m2 | No | Panel surface area in m2 (optional, informational) | -
panel_watt_peak | Yes | Total peak power of the installation in Watts (Wp) | -
hours_sun_per_day | No | Average peak sun hours per day | 4
efficiency_loss_pct | No | System efficiency loss percentage | 15%

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavior (e.g., pure calculation, no side effects, output format). It only says 'estimate energy output', lacking any details on whether it queries external data or returns structured results.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words. However, it lacks structure (e.g., bullet points) that could improve scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description should specify return values (e.g., kWh). It mentions daily and annual output but omits units and format. Adequate for basic understanding but incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% so baseline is 3. Description adds context (daily/annual output) but doesn't explain parameter relationships or formulas beyond what schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates daily and annual energy output of a solar panel installation, using a specific verb and resource. This distinguishes it from sibling tools like calculate_solar_roi which focus on financial returns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives (e.g., calculate_solar_roi). No mention of prerequisites, limitations, or context for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_solar_roi (Grade C)

Compute solar panel return on investment over their lifetime. Use for energy audit. Inputs: install cost, kW capacity, location production, kWh price. Returns ROI years and lifetime savings. See list_bundles for related 'energie' calculators.

Parameters

Name | Required | Description
price_kwh | No | Electricity price EUR/kWh
annual_kwh | Yes | Annual production kWh
system_cost | Yes | Total system cost EUR
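Payback and lifetime savings from these three inputs can be sketched as below. The 25-year lifetime and the neglect of panel degradation and price inflation are assumptions of the sketch, not documented behavior.

```python
def solar_roi(system_cost, annual_kwh, price_kwh=0.20, lifetime_years=25):
    """Simple payback period and lifetime savings for a PV system."""
    annual_savings = annual_kwh * price_kwh        # EUR saved per year
    payback_years = system_cost / annual_savings
    lifetime_savings = annual_savings * lifetime_years - system_cost
    return {"payback_years": round(payback_years, 1),
            "lifetime_savings_eur": round(lifetime_savings, 2)}

# 8000 EUR system producing 3500 kWh/year at 0.25 EUR/kWh
print(solar_roi(system_cost=8000, annual_kwh=3500, price_kwh=0.25))
```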

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosure but fails to mention any behavioral traits. It does not state that the tool performs a calculation, what the result represents (e.g., a percentage or currency amount), or any assumptions like using a fixed interest rate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one phrase) but lacks substance. It is not verbose, yet its brevity comes at the cost of omitting necessary context. A balance of conciseness and informativeness would improve it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema or annotations, the description should clarify the return type and any key assumptions. The current description leaves the agent uncertain about what the result represents (e.g., a ratio, percentage, or net profit) and how to interpret the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description adds no extra meaning beyond the schema fields. The schema already describes each parameter (e.g., 'Electricity price EUR/kWh'), so the description's lack of additional detail is acceptable but does not exceed expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Solar panel return on investment' clearly identifies the tool's purpose: to calculate ROI for solar panel systems. It uses a specific verb-resource pair and distinguishes from sibling tools like 'calculate_solar_panel_output' by focusing on financial return rather than technical output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it does not mention that this tool is for financial analysis versus production estimation, leaving the agent to infer from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_solution_dilution (Grade C)

Compute lab solution dilution C1·V1=C2·V2. Use for stock-to-working solution prep. Inputs: stock concentration, target concentration, target volume. Returns stock volume and diluent volume. See list_bundles for related 'science' calculators.

Parameters

Name | Required | Description
c1 | Yes | Initial concentration mol/L
c2 | Yes | Target concentration mol/L
v1_ml | Yes | Initial volume mL
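The C1·V1 = C2·V2 relation in the description solves directly for the final volume. A sketch, with illustrative field names in the returned dict:

```python
def dilute(c1, c2, v1_ml):
    """C1*V1 = C2*V2: final volume and diluent to add when diluting a stock."""
    if c2 <= 0 or c2 > c1:
        raise ValueError("target concentration must be positive and not exceed the stock")
    v2_ml = c1 * v1_ml / c2
    return {"final_volume_ml": v2_ml, "diluent_to_add_ml": v2_ml - v1_ml}

# Dilute 10 mL of 1.0 mol/L stock down to 0.25 mol/L
print(dilute(c1=1.0, c2=0.25, v1_ml=10.0))  # {'final_volume_ml': 40.0, 'diluent_to_add_ml': 30.0}
```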

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only mentions the formula, not what the tool actually does (e.g., computes the final volume v2), and makes no mention of output format, units, or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

An extremely short single sentence that lacks necessary detail. It is under-specified rather than concisely complete.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema and no description of what the tool returns. For a calculation tool, the output (e.g., v2 value) is critical. Description is incomplete for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions for each parameter (concentration units, volume units). Description adds no extra meaning beyond the formula, but schema already does the job.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description 'Dilution calculation C1V1=C2V2' essentially restates the tool name without specifying what it computes or returns. It does not clarify whether it calculates a missing variable or what that variable is.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus the many sibling tools (e.g., calculate_dilution). No context for prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_speed_distance_time (Grade A)

Solve speed/distance/time — provide any 2 of 3 values to compute the missing one. Returns: {error}. See list_bundles for related 'sport' calculators.

Parameters

Name | Required | Description
speed | No | Speed in km/h
distance | No | Distance in kilometers
time_minutes | No | Time in minutes
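The "any 2 of 3" contract the review praises below can be made concrete. Units follow the schema (km/h, km, minutes); the function name and returned field names are illustrative.

```python
def speed_distance_time(speed=None, distance=None, time_minutes=None):
    """Given exactly two of speed (km/h), distance (km), time (min), compute the third."""
    if sum(v is not None for v in (speed, distance, time_minutes)) != 2:
        raise ValueError("provide exactly two of the three values")
    if speed is None:
        return {"speed_kmh": distance / (time_minutes / 60.0)}
    if distance is None:
        return {"distance_km": speed * time_minutes / 60.0}
    return {"time_minutes": distance / speed * 60.0}

# Marathon distance in 4 hours -> average speed in km/h
print(speed_distance_time(distance=42.195, time_minutes=240))
```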

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It accurately describes the computation behavior. It does not detail what happens if fewer or more than two values are given, but for a simple calculation tool the implication is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-crafted sentence that conveys all essential information without wasted words. It is front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description could be more complete by mentioning the return value (the missing value with unit). However, for a straightforward calculation tool, the current description is largely sufficient for an agent to infer the behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with clear parameter descriptions (speed in km/h, distance in km, time in minutes). The description adds value by explaining the relationship ('provide any 2 of 3 values to compute the missing one'), which enhances understanding beyond individual parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('solve', 'compute') and clearly identifies the resource ('speed/distance/time'). It explicitly states the input requirement (any 2 of 3 values) and output (missing one). This differentiates it from sibling tools like calculate_speed_of_sound or calculate_distance_2d.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use the tool: when the agent has two of the three values (speed, distance, time) and needs the third. No explicit alternatives are given, but given the specificity of speed/distance/time, the context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_speed_of_sound (Grade B)

Compute speed of sound in air at a given temperature. Use for physics or audio engineering. Formula: c=331.3+0.606·T_C. Inputs: temperature °C. Returns speed in m/s. See list_bundles for related 'science' calculators.

Parameters

Name | Required | Description
temperature_c | Yes | Celsius
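The linear formula stated in the description, c = 331.3 + 0.606·T, is one line of code. This approximation holds for dry air over a limited range around everyday temperatures; the function name is illustrative.

```python
def speed_of_sound(temperature_c):
    """Speed of sound in dry air, m/s: linear approximation c = 331.3 + 0.606*T."""
    return 331.3 + 0.606 * temperature_c

print(round(speed_of_sound(20), 1))  # 343.4
```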

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It does not disclose output units, formula, assumptions, or side effects. For a pure calculation tool, more behavioral context (e.g., returns m/s) is needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. It is appropriately concise for a simple tool, though it could benefit from additional context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema or description of return value. No mention of valid temperature range or formula. While the tool is simple, an AI agent would need more context to use it confidently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter name 'temperature_c' and description 'Celsius'. The description adds no extra meaning beyond the schema, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly identifies the tool as calculating the speed of sound in air at a given temperature, using a specific verb and resource. It distinguishes itself from sibling tools like 'calculate_angle_convert' or 'calculate_anything'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor any exclusion criteria. The description lacks context for selecting this tool over other calculation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_sphere (Grade A)

Compute sphere volume V=(4/3)πr³ and surface area A=4πr². Use for ball, tank, or astronomy problems. Inputs: radius. Returns volume and area. See list_bundles for related 'math' calculators.

Parameters

Name | Required | Description
radius | Yes | Radius
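The two formulas in the description translate directly; units simply follow the radius, since the schema implies no unit handling. A sketch:

```python
import math

def sphere(radius):
    """Sphere volume V = (4/3)*pi*r^3 and surface area A = 4*pi*r^2."""
    return {"volume": 4.0 / 3.0 * math.pi * radius ** 3,
            "surface_area": 4.0 * math.pi * radius ** 2}

r = sphere(2.0)
print(round(r["volume"], 3), round(r["surface_area"], 3))  # 33.51 50.265
```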

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the tool calculates volume and surface area, but does not specify return format, precision, or unit assumptions. For a simple math tool, basic behavior is conveyed, but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at five words, front-loading the tool's purpose. Every word is necessary and there is no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description adequately conveys its function. However, adding 'returns both volume and surface area' would slightly improve completeness for agents without output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the radius parameter described as 'Radius'. The description adds no additional meaning, such as units or acceptable ranges, beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Sphere volume and surface area', clearly indicating the tool calculates both properties of a sphere. It distinguishes from sibling tools like 'calculate_cylinder' and 'calculate_cone' by specifying the shape and outputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies usage for sphere calculations, but does not mention exclusions or recommend sibling tools for other shapes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_spring_constant (Grade C)

Compute spring constant k from Hooke's law F=k·x. Use for physics or mechanical design. Inputs: force N, displacement m. Returns spring constant N/m. See list_bundles for related 'science' calculators.

Parameters

Name | Required | Description
force_n | Yes | Applied force N
displacement_m | Yes | Displacement m
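Hooke's law as stated gives k = F/x. A sketch; the guard against non-positive displacement is an assumption about sensible behavior, not documented:

```python
def spring_constant(force_n, displacement_m):
    """Hooke's law F = k*x, so k = F / x, in N/m (linear elastic regime assumed)."""
    if displacement_m <= 0:
        raise ValueError("displacement must be positive")
    return force_n / displacement_m

# 50 N stretching a spring by 10 cm
print(spring_constant(force_n=50.0, displacement_m=0.1))
```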

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without any annotations, the description must fully convey behavioral traits. It does not mention output units (N/m), any assumptions (e.g., linear elasticity), or constraints beyond the schema's minimum displacement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (4 words) but omits essential information such as output and formula context. It is under-specified rather than efficiently concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the tool's simplicity, the description lacks key details like the output (spring constant k), the formula F = kx, and any assumptions. This makes it incomplete for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% coverage with descriptions for both parameters. The description adds no additional meaning beyond what the schema already states, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Spring constant from Hooke's law' clearly indicates the tool calculates the spring constant using Hooke's law, with a specific verb and resource. It distinguishes from sibling tools by its function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, limitations, or comparison to other calculate tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_staircase (Grade C)

Calculate staircase dimensions using Blondel formula. Returns: {step_height_cm, giron_cm, blondel, blondel_ok}. See list_bundles for related 'construction' calculators.

Parameters

Name | Required | Description
total_height_cm | Yes | Total height cm
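One plausible reading of the Blondel-based computation, matching the documented return fields: pick a riser count near a comfortable step height, then derive the going from 2h + g ≈ 63 cm. The 17 cm target and the 60-64 cm comfort band are common rules of thumb, not taken from this server's documentation.

```python
def staircase(total_height_cm, target_riser_cm=17.0, blondel_cm=63.0):
    """Blondel rule sketch: 2 * riser + going ~ 63 cm."""
    n_steps = max(1, round(total_height_cm / target_riser_cm))
    step_height = total_height_cm / n_steps
    giron = blondel_cm - 2.0 * step_height   # going, derived from the Blondel target
    blondel = 2.0 * step_height + giron      # equals blondel_cm by construction here
    return {"n_steps": n_steps,
            "step_height_cm": round(step_height, 1),
            "giron_cm": round(giron, 1),
            "blondel_ok": 60.0 <= blondel <= 64.0}

print(staircase(272))  # 16 steps of 17 cm, going 29 cm
```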

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only reveals the formula used. It does not disclose the output format, assumptions, or any safety implications, leaving significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that quickly conveys the core purpose. It is efficient but could include more information without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should explain return values. It neither describes the computed dimensions nor differentiates from similar siblings, leaving the tool's full behavior unclear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a basic description for the single parameter. The tool description adds only the formula context, not further semantic detail beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates staircase dimensions using the Blondel formula, specifying both the resource and method. However, it does not differentiate from the sibling tool 'calculate_concrete_stairs', which may also involve staircase calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_concrete_stairs'. It lacks any context about prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_staking_rewards (Grade C)

Calculate staking rewards with optional compounding for a given APY and duration. Returns: {initial_amount}. See list_bundles for related 'crypto' calculators.

Parameters

Name | Required | Description
amount | Yes | Initial staking amount in coins or fiat
apy_pct | Yes | Annual Percentage Yield in percent
compounding | Yes | Compounding frequency
duration_days | Yes | Staking duration in days
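A plausible compounding model: convert the APY to a per-period rate so that one full year reproduces the stated APY exactly, then compound over the duration. The schema's `compounding` frequency is simplified here to a numeric `periods_per_year`, and the returned field names are illustrative.

```python
def staking_rewards(amount, apy_pct, duration_days, periods_per_year=365):
    """Compound staking rewards from an APY over a duration in days."""
    rate = (1.0 + apy_pct / 100.0) ** (1.0 / periods_per_year) - 1.0
    periods = duration_days / 365.0 * periods_per_year
    final = amount * (1.0 + rate) ** periods
    return {"final_amount": round(final, 2), "rewards": round(final - amount, 2)}

# 1000 coins at 5% APY for one year -> 1050, independent of periods_per_year,
# because the per-period rate is derived from the APY
print(staking_rewards(1000, 5, 365))
```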

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully convey behavior. It states the tool calculates rewards but does not disclose whether it is a read-only operation, the return format (e.g., number or object), or the underlying formula assumptions (e.g., compounding periods per year). This lack of transparency could lead to misinterpretation by an AI agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 11 words that is direct and front-loaded. Every word earns its place; there is no fluff. It efficiently communicates the tool's purpose and key input parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (4 params, no output schema), the description is insufficient. It does not specify the return format or the exact calculation method, which is crucial for an AI agent to interpret results. With many sibling tools, more detail is needed to ensure correct usage. The lack of output schema amplifies this deficiency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All four parameters have descriptions in the input schema (100% coverage), so the schema already conveys meaning. The tool description adds minimal extra value, such as the word 'optional' for compounding, which mirrors the schema. No additional examples, units, or constraints are provided beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates staking rewards with optional compounding for a given APY and duration. It specifies the key inputs (APY, duration) and distinguishes itself from siblings like calculate_compound_interest by focusing on staking. However, it could be more precise about the exact output (e.g., total rewards vs final balance).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lacks any guidance on when to use this tool versus sibling calculators such as calculate_compound_interest or calculate_crypto_profit_loss. No context is given for prerequisites, limitations, or alternatives, which is a significant gap given the large number of similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_stamp_duty_uk (Grade C)

Compute UK Stamp Duty Land Tax (SDLT). Use for UK home buyers. Inputs: property price, first-time-buyer flag, second-home flag. Returns SDLT due and effective rate. See list_bundles for related 'finance-uk' calculators.

Parameters (JSON Schema)
  price (required): Property purchase price in GBP
  first_time_buyer (optional): Whether buyer is a first-time buyer (default false)

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavior beyond the basic computation. It omits key details such as the banded rates, surcharges, and the fact that SDLT applies only in England and Northern Ireland, leaving the agent uninformed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
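The banded (slice-based) structure that the review says goes undisclosed can be illustrated with a generic bracket calculator. The bands below are placeholders for illustration only, not current SDLT rates:

```python
def banded_tax(price: float, bands: list) -> dict:
    """Generic slice-based (banded) tax, as used by SDLT-style schedules.

    `bands` is a list of (upper_limit, rate) pairs in ascending order;
    the final band should use float('inf').
    """
    tax, lower = 0.0, 0.0
    for upper, rate in bands:
        if price > lower:
            # Tax only the slice of the price that falls in this band.
            tax += (min(price, upper) - lower) * rate
        lower = upper
    return {"tax": tax,
            "effective_rate": tax / price if price else 0.0}

# Placeholder bands for illustration only -- NOT current SDLT rates.
EXAMPLE_BANDS = [(250_000, 0.00), (925_000, 0.05), (float("inf"), 0.10)]
```

A description that named the banding mechanism, even without exact rates, would give the agent the behavioral context this dimension asks for.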

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (6 words, 1 sentence). While efficient, it sacrifices necessary detail. No structural issues.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and lack of output schema, the description is too thin. It does not explain what the output represents or what the calculation entails, leaving the agent with incomplete context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions, but the description adds no additional meaning or context beyond the schema. Baseline of 3 is not earned due to lack of enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it calculates UK Stamp Duty Land Tax (SDLT), with a specific verb and resource. It distinctively identifies the tool among many siblings like calculate_uk_income_tax.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., other UK tax calculators). The description simply states the function without context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_star_magnitude_distance (grade C)

Calculate star distance from apparent and absolute magnitude. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)
  absolute_magnitude (required): Absolute magnitude (M)
  apparent_magnitude (required): Apparent magnitude (m)

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description should disclose behavioral traits. It does not mention what the distance unit is (likely parsecs), nor does it describe the formula or any side effects. The agent gets minimal insight beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that is front-loaded with the tool's core purpose. It is appropriately sized, though it could include additional useful information without becoming wordy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description should explain the return value (e.g., distance in parsecs) and possibly the formula. It lacks this crucial information, leaving the agent to guess the output format. Given the tool's simplicity, this is a notable gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
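The return value the review expects (distance in parsecs) follows from the standard distance modulus, m − M = 5·log10(d) − 5. A minimal sketch, assuming the tool implements this textbook formula:

```python
import math

def star_distance_parsecs(apparent_m: float, absolute_M: float) -> float:
    """Distance modulus m - M = 5*log10(d) - 5, solved for d in parsecs."""
    return 10.0 ** ((apparent_m - absolute_M + 5.0) / 5.0)
```

By construction, a star whose apparent and absolute magnitudes are equal lies at exactly 10 parsecs, which is a quick sanity check for the output unit.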

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, with parameters already described as 'Absolute magnitude (M)' and 'Apparent magnitude (m)'. The description adds no extra meaning or context beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates star distance from apparent and absolute magnitude, which is a specific verb+resource combination. However, it does not explicitly differentiate from sibling tools, though the specificity helps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other calculation tools. There is no mention of prerequisites, exclusions, or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_statistics (grade B)

Calculate descriptive statistics: mean, median, mode, std dev, quartiles. Returns: {count, std_deviation, min, max, range, iqr}. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
  values (required): Array of numbers

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must fully disclose behavior. It reveals that the tool calculates multiple statistics but does not mention the return format (e.g., object with named fields) or any constraints (e.g., minimum sample size). This lack of detail limits transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 10 words, immediately stating the action and scope. It is front-loaded and contains no extraneous content. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one array parameter) and the absence of an output schema, the description is adequate but incomplete. It fails to specify the return structure, which an agent would need to interpret results correctly. With many sibling tools providing similar computations, more context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
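A sketch of the advertised statistics using Python's standard statistics module. Field names follow the tool description; the choices of population standard deviation and inclusive quartiles are assumptions, since the description does not specify them:

```python
import statistics

def describe(values: list) -> dict:
    """Sketch of the descriptive statistics the tool advertises."""
    q = statistics.quantiles(values, n=4, method="inclusive")  # Q1, Q2, Q3
    return {
        "count": len(values),
        "mean": statistics.fmean(values),
        "median": statistics.median(values),
        "mode": statistics.mode(values),
        "std_deviation": statistics.pstdev(values),  # population std dev
        "min": min(values),
        "max": max(values),
        "range": max(values) - min(values),
        "iqr": q[2] - q[0],
    }
```

Ambiguities like sample vs population standard deviation and quartile method are exactly the kind of detail the review says the description should pin down.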

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the parameter 'values' with 100% coverage. The description adds no additional semantic information about parameters (e.g., ordering, units, or handling of edge cases). Baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'descriptive statistics', listing specific statistics (mean, median, mode, std dev, quartiles). This distinguishes it from sibling tools like calculate_average which compute only a single statistic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternative tools. Given the large number of sibling tools (e.g., calculate_average, calculate_percentile_rank), explicit usage context is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_string_tension (grade B)

Calculate guitar or bass string tension in pounds, kilograms and Newtons. See list_bundles for related 'musique' calculators.

Parameters (JSON Schema)
  frequency_hz (required): Target tuning frequency in Hz (e.g. 329.63 for E4)
  gauge_inches (required): String gauge in inches (e.g. 0.010 for a light gauge high E)
  scale_length_inches (required): Instrument scale length in inches (e.g. 25.5 for Fender Stratocaster)

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It mentions the calculation and units but does not disclose the output format, behavior on invalid inputs, or any side effects. Whether a single value or all three units are returned is unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no unnecessary words. All information is front-loaded and efficiently presented.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema, so description should specify return format. It mentions units but not how results are returned (e.g., single value? all three?). Given tool simplicity, it is mostly adequate but missing some contextual details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
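The undisclosed derivation can be illustrated with Mersenne's law, T = μ(2Lf)², assuming an ideal plain steel string of circular cross-section. The density constant is an assumption for illustration, not the server's calibration, and wound strings would need a different linear density:

```python
import math

STEEL_DENSITY = 7850.0   # kg/m^3 -- assumption: plain steel string
INCH_TO_M = 0.0254
G = 9.80665              # standard gravity, for N -> kgf
LB_PER_N = 0.2248089431  # N -> lbf

def string_tension(frequency_hz: float, gauge_inches: float,
                   scale_length_inches: float) -> dict:
    """Mersenne's law T = mu * (2*L*f)^2 for an ideal plain string."""
    radius_m = (gauge_inches * INCH_TO_M) / 2.0
    mu = STEEL_DENSITY * math.pi * radius_m ** 2   # linear density, kg/m
    length_m = scale_length_inches * INCH_TO_M
    tension_n = mu * (2.0 * length_m * frequency_hz) ** 2
    return {"newtons": tension_n,
            "kilograms": tension_n / G,
            "pounds": tension_n * LB_PER_N}
```

For the schema's own example (0.010" high E at 329.63 Hz on a 25.5" scale), this lands near the roughly 16 lb figure quoted for light-gauge sets, suggesting all three units are returned together.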

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions for all three parameters. The description adds value by specifying output units but does not enhance parameter meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Calculate guitar or bass string tension' with specific units (pounds, kilograms, Newtons). It distinguishes well from the many other calculate_* tools by specifying the exact resource and instrument context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool or alternatives. It simply states what it does without context about prerequisites, suitable applications, or comparisons to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_student_loan_repayment (grade C)

Compute student loan repayment schedule and total interest. Use for graduates planning repayment. Inputs: loan amount, interest rate %, term years. Returns monthly payment, total paid, total interest. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
  annual_rate (required): Annual interest rate percent
  loan_amount (required): Loan amount EUR
  monthly_payment (required): Monthly payment EUR

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits, but it covers little beyond the computation itself. It fails to mention assumptions (e.g., fixed interest rate, currency), what the output contains (e.g., list of payments, total interest), or any side effects. This is insufficient for a safe and accurate invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, making it concise. However, it lacks any structural elements like bullet points or paragraphs to aid scanning. For a tool with three required parameters, it is minimal but not overly verbose, earning a middle score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain the return format (e.g., schedule details, table). It omits critical context like currency (EUR) implied by the schema, assumptions, or edge cases. The tool is not self-contained for an agent to understand its complete behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
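The arithmetic implied by the schema (loan amount, annual rate, fixed monthly payment) can be sketched with the standard closed-form amortization formula for the number of payments, n = −ln(1 − rP/M)/ln(1 + r). This is an illustrative model, not the server's code:

```python
import math

def repayment_summary(loan_amount: float, annual_rate: float,
                      monthly_payment: float) -> dict:
    """Months to repay and total interest for a fixed-rate loan.

    annual_rate is a percentage (e.g. 4.5). Raises ValueError when the
    payment does not even cover the first month's interest.
    """
    r = annual_rate / 100.0 / 12.0  # monthly rate
    if r == 0:
        months = loan_amount / monthly_payment
    else:
        if monthly_payment <= loan_amount * r:
            raise ValueError("payment never amortizes the loan")
        months = (-math.log(1.0 - r * loan_amount / monthly_payment)
                  / math.log(1.0 + r))
    months = math.ceil(months)
    total_paid = months * monthly_payment  # upper bound; last payment is smaller
    return {"months": months,
            "total_paid": total_paid,
            "total_interest": total_paid - loan_amount}
```

The never-amortizes edge case (payment below the monthly interest) is exactly the kind of behavior the description leaves undocumented.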

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all three parameters have descriptions in the schema). The tool description adds no extra context or constraints beyond what's in the schema. Baseline score of 3 is appropriate as the description does not enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'student loan repayment schedule', making the tool's purpose evident. However, it does not distinguish from sibling tools like 'calculate_loan_payment' or 'calculate_us_student_loan', missing an opportunity to clarify uniqueness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'calculate_loan_payment' or 'calculate_us_student_loan'. There are no context hints, when-not-to-use advice, or references to other tools, leaving the agent to infer usage without support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_study_schedule (grade B)

Generate a study schedule based on exam date and topics. Returns: {total_hours_needed, daily_hours_needed, feasible}. See list_bundles for related 'education' calculators.

Parameters (JSON Schema)
  exam_date (required): Exam date YYYY-MM-DD
  topics_count (required): Number of topics to study
  hours_per_topic (required): Hours needed per topic

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It does not mention any side effects, authentication needs, rate limits, or data mutation. The tool likely has no destructive effects, but this is not stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no extraneous information. It is concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema and the description does not indicate what the schedule looks like (e.g., list of study sessions, date ranges). For a generate tool, the output format is critical. The description is incomplete for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
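The advertised fields can be reproduced with simple date arithmetic. The 8-hour feasibility threshold and the `today` override are assumptions for illustration:

```python
from datetime import date

MAX_DAILY_HOURS = 8.0  # assumed feasibility threshold

def study_schedule(exam_date: str, topics_count: int, hours_per_topic: float,
                   today: date = None) -> dict:
    """Sketch of {total_hours_needed, daily_hours_needed, feasible}."""
    today = today or date.today()
    days_left = (date.fromisoformat(exam_date) - today).days
    if days_left <= 0:
        raise ValueError("exam date must be in the future")
    total = topics_count * hours_per_topic
    daily = total / days_left
    return {"total_hours_needed": total,
            "daily_hours_needed": round(daily, 2),
            "feasible": daily <= MAX_DAILY_HOURS}
```

Whether the real tool raises on past exam dates, and what daily budget it considers feasible, are precisely the gaps the review flags.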

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters. The description adds no additional meaning beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a study schedule based on exam date and topics. It uses a specific verb ('Generate') and identifies the resource ('study schedule'). Among the many sibling tools, this purpose is distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. There is no mention of prerequisites, typical use cases, or when not to use it. The description is purely declarative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_sun_exposure (grade B)

Calculate safe sun exposure time based on UV index and Fitzpatrick skin type. Returns: {skin_description, safe_exposure_minutes, with_spf30_minutes, uv_risk_level, recommendations}. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)
  uv_index (required): UV index at destination (1–11+)
  skin_type (required): Fitzpatrick skin type: 1=very fair, 6=very dark

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility. It does not disclose limitations (e.g., ignores cloud cover, altitude), what the output represents, or any side effects. The description merely states the task without behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with 12 words, making it concise and front-loaded. However, it is arguably too brief and could benefit from slight expansion to improve clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two fully described parameters and no output schema, the description is moderately complete. It lacks information about the output format or any assumptions, but given the low complexity, it is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
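A plausible heuristic for the advertised output is a per-skin-type base time scaled down by UV index. Both the base-time table and the SPF-30 multiplier below are assumptions for illustration, not the server's calibration:

```python
# Assumed self-protection times (minutes at UV index 1) per Fitzpatrick
# type. Illustrative values only -- not the server's calibration.
BASE_MINUTES = {1: 67, 2: 100, 3: 200, 4: 300, 5: 400, 6: 500}

def sun_exposure(uv_index: float, skin_type: int) -> dict:
    """Heuristic safe-exposure sketch: base time divided by UV index."""
    if skin_type not in BASE_MINUTES:
        raise ValueError("Fitzpatrick skin type must be 1-6")
    safe = BASE_MINUTES[skin_type] / max(uv_index, 1.0)
    return {"safe_exposure_minutes": round(safe),
            "with_spf30_minutes": round(safe * 30)}
```

Disclosing the calibration (and its neglect of cloud cover and altitude, as noted under Behavior) would let an agent judge whether the numbers are trustworthy.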

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters already described (uv_index: number 1-20, skin_type: enum with values 1-6). The description adds no additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: calculating safe sun exposure time using UV index and skin type. It uses a specific verb ('calculate') and resource ('safe sun exposure time'), and it is distinct from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool or when not to. There is no mention of prerequisites, alternatives (like calculate_sunscreen_reapply), or context for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_sunrise_approx (grade C)

Estimate sunrise/sunset times for a latitude on a given day of year. Use for astronomy or outdoor planning. Inputs: latitude, day of year. Returns sunrise/sunset hours, daylight duration. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)
  latitude (required): Latitude
  day_of_year (required): Day of year (1-366)

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It only flags the result as approximate, without detailing limitations (e.g., precision, whether it accounts for atmospheric refraction or altitude), side effects, or any computational constraints. This is insufficient for behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise but lacks detail. It is appropriately front-loaded but fails to include necessary information that should be present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should explain what the tool returns (e.g., sunrise and sunset times in hours/minutes, or a JSON object). It also does not mention units or how results are structured, leaving agents guessing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
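The approximation is presumably the classic sunrise equation, cos ω = −tan φ · tan δ. A hedged sketch in local solar time; the declination formula and the polar clamping behavior are assumptions:

```python
import math

def sunrise_sunset(latitude_deg: float, day_of_year: int) -> dict:
    """Classic sunrise-equation approximation in local solar time.

    Ignores refraction, elevation, and the equation of time; clamps
    polar day/night. Output hours are solar time, not clock time.
    """
    # Approximate solar declination (degrees).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    lat, dec = math.radians(latitude_deg), math.radians(decl)
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))   # polar day/night clamp
    hour_angle = math.degrees(math.acos(cos_h))
    half_day = hour_angle / 15.0         # 15 degrees of hour angle per hour
    return {"sunrise_solar_time": 12.0 - half_day,
            "sunset_solar_time": 12.0 + half_day,
            "day_length_hours": 2.0 * half_day}
```

Solar time vs clock time, and behavior at polar latitudes, are the output ambiguities this dimension asks the description to resolve.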

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic descriptions ('Latitude', 'Day of year (1-366)'). The description adds no additional semantics beyond the schema. Baseline 3 is appropriate as the schema already documents parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Estimate sunrise/sunset times' clearly states the tool's purpose and signals its approximate nature, matching the tool name. However, it does not differentiate from the sibling tool 'calculate_sunrise_sunset', which is also described as approximate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this approximate calculation versus alternatives like 'calculate_sunrise_sunset'. The description lacks context on trade-offs (e.g., accuracy vs. speed).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_sunrise_sunset (grade C)

Approximate sunrise and sunset times based on latitude and day of year. Returns: {sunrise_solar_time, sunset_solar_time, day_length_hours}. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)
  latitude (required): Latitude in degrees
  day_of_year (required): Day of year (1-365)

Output Schema
  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description bears the full burden of disclosing behavioral traits. It mentions 'approximate' but fails to detail edge cases (e.g., polar regions, timezone handling) or the nature of the approximation. This is insufficient for correctly interpreting time-sensitive results.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded with the action and resource. It is concise with no extraneous information, though it could be slightly longer to include output format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema and the description does not specify what exactly is returned (e.g., times in UTC, local, or a tuple). The lack of detail on output format and accuracy limits, especially given the 'approximate' qualifier, makes it incomplete for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with both parameters already described in the input schema. The tool description simply restates 'latitude' and 'day of year' without adding new constraints, context, or format details, meeting the baseline expectation but adding no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates approximate sunrise and sunset times based on latitude and day of year. The verb 'calculate' and the specific resource 'sunrise and sunset times' make the purpose clear, though it does not differentiate from the sibling tool 'calculate_sunrise_approx'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'calculate_sun_exposure' or 'calculate_sunrise_approx'. The description lacks any explicit usage context, prerequisites, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
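
The "approximate" method the description leaves unexplained is most likely the textbook declination/hour-angle formula, since latitude and day of year are the only inputs. A minimal sketch (not necessarily the server's exact formula) that also makes the polar edge cases flagged above explicit:

```python
import math

def sunrise_sunset_approx(latitude_deg, day_of_year):
    """Approximate local-solar sunrise/sunset hours (0-24) from latitude
    and day of year, using the standard declination/hour-angle method."""
    # Solar declination in degrees (common cosine approximation)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle at sunrise/sunset: cos(h) = -tan(lat) * tan(decl)
    cos_h = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    if cos_h > 1:
        return None  # polar night: the sun never rises
    if cos_h < -1:
        return None  # midnight sun: the sun never sets
    h = math.degrees(math.acos(cos_h))
    sunrise = 12.0 - h / 15.0  # 15 degrees of hour angle per hour
    sunset = 12.0 + h / 15.0
    return sunrise, sunset
```

At the equator near the equinox this yields roughly 06:00 and 18:00 local solar time; at high latitudes in midwinter it returns `None`, which is exactly the edge case the review says the description should disclose.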

calculate_sunscreen_reapply (C)

Compute when to reapply sunscreen based on SPF, activity, and water exposure. Use for sun safety. Inputs: SPF, skin type, activity (sweat/swim), UV index. Returns next reapply time. See list_bundles for related 'sante' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| spf | Yes | SPF factor | |
| uv_index | Yes | Current UV index | |
| skin_type | Yes | Fitzpatrick skin type 1-6 | |
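
Since the actual formula is undocumented, here is what such a tool might compute. Every constant below is a hypothetical heuristic, not the server's method: unprotected burn time scales with Fitzpatrick skin type, SPF multiplies it, UV index divides it, and water exposure halves it.

```python
def sunscreen_reapply_minutes(spf, uv_index, skin_type, water_exposure=False):
    """Hypothetical heuristic for minutes until sunscreen reapplication.
    Base burn times per skin type, the UV divisor, and the 120-minute
    cap are illustrative assumptions, not the tool's documented logic."""
    base_minutes = {1: 10, 2: 15, 3: 20, 4: 30, 5: 40, 6: 60}[skin_type]
    minutes = base_minutes * spf / max(uv_index, 1)
    if water_exposure:
        minutes /= 2  # water and sweat reduce effective protection
    return min(minutes, 120)  # reapply at least every two hours
```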

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but provides only a vague purpose, disclosing no behavioral traits such as output format, assumptions, or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise but minimally structured: it front-loads the purpose yet offers little detail beyond it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a health-related calculation with three numeric inputs and no output schema, the description omits critical context such as output format, units, and assumptions, making it incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All three parameters have descriptions in the input schema, so the description adds no extra meaning. Baseline score of 3 applies since schema coverage is 100%.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the action (calculate) and the specific outputs (sun protection duration and reapplication time), distinguishing it from the similar sibling calculate_sun_exposure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like calculate_sun_exposure, and no conditions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_surface_carrez (B)

Calculate Carrez law surface area (French legal measurement). Returns: {carrez_surface_m2, total_surface_m2, excluded_m2, included_rooms, excluded_rooms, note}. See list_bundles for related 'immobilier' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| rooms | Yes | List of rooms with area and ceiling height | |
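
The review's main complaint is that the 1.80 m ceiling-height rule is undocumented. That inclusion rule can be sketched as follows; the room dict keys (`area_m2`, `ceiling_height_m`) are assumed names, and the real Carrez law also excludes balconies, cellars, parking, and other surfaces this sketch ignores:

```python
def carrez_surface(rooms):
    """Sum floor areas of rooms whose ceiling height is at least 1.80 m,
    the core Carrez-law inclusion rule. Simplified sketch only."""
    included = [r for r in rooms if r["ceiling_height_m"] >= 1.80]
    excluded = [r for r in rooms if r["ceiling_height_m"] < 1.80]
    return {
        "carrez_surface_m2": sum(r["area_m2"] for r in included),
        "total_surface_m2": sum(r["area_m2"] for r in rooms),
        "excluded_m2": sum(r["area_m2"] for r in excluded),
    }
```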

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description bears full responsibility for behavioral disclosure. It fails to mention key Carrez law specifics, such as the minimum ceiling height of 1.80m for inclusion, the exclusion of certain areas, or the output format. This lack of detail could lead to incorrect agent behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with 9 words, making it highly concise and front-loaded. However, it omits useful details like the ceiling height threshold, which could be added without hurting conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's specific legal context and the absence of an output schema or annotations, the description is incomplete. It does not explain Carrez law rules (e.g., which rooms to include, handling of low ceilings), nor does it describe the return value, leaving significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already provides adequate meaning for each parameter. The description adds no further semantic value beyond the overall purpose, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates 'Carrez law surface area', which is a specific French legal measurement. The verb 'calculate' combined with the distinct legal reference effectively distinguishes it from the many sibling calculate_* tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for French legal surface area calculations but provides no explicit guidance on when to use or avoid this tool, nor any mention of alternatives. Given the many sibling tools, clearer usage context would be beneficial.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_swimming_pace (C)

Calculate swimming pace per 100m and SWOLF efficiency estimate. Returns: {pace_per_100m_min, pace_formatted, swolf_estimate}. See list_bundles for related 'sport' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| distance_m | Yes | Distance swum in meters | |
| time_minutes | Yes | Total swim time in minutes | |
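
The pace arithmetic implied by the two inputs and the `pace_per_100m_min`/`pace_formatted` output fields is straightforward. A sketch (the SWOLF estimate is omitted here, since stroke count is not among the inputs and the tool's estimation method is undocumented):

```python
def swimming_pace(distance_m, time_minutes):
    """Pace per 100 m from total distance and time, with a min:sec string."""
    pace = time_minutes / (distance_m / 100.0)  # minutes per 100 m
    minutes = int(pace)
    seconds = round((pace - minutes) * 60)
    return {"pace_per_100m_min": pace,
            "pace_formatted": f"{minutes}:{seconds:02d} /100m"}
```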

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It states only what it calculates and does not disclose behavioral traits such as being read-only, having no side effects, or the return structure. For a calculator, safety is implied but not stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is concise and front-loaded. However, it could be slightly expanded to include output details without losing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple calculator with full schema coverage. However, no output schema or return description is provided. Lacks explanation of SWOLF formula or expected output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no extra meaning beyond the schema; 'per 100m' is inferable from the tool name. Baseline score is appropriate as schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it calculates swimming pace per 100m and SWOLF efficiency. It uses a specific verb and resource, distinguishing it from sibling tools like calculate_running_pace. However, it does not explicitly differentiate itself from similar sports calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives. With many similar tools (e.g., calculate_running_pace, calculate_cycling_power), the description lacks context for selection. No prerequisites or constraints are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_swiss_income_tax (C)

Calculate Swiss income tax — federal + estimated cantonal tax. Returns: {income, federal_tax, federal_marginal_rate_pct, cantonal_tax_estimate, cantonal_rate_pct, effective_rate_pct, ...}. See list_bundles for related 'finance-suisse' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| canton | No | Canton of residence | geneve |
| income | Yes | Annual taxable income in CHF | |
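
A progressive-bracket computation of the kind this tool implies can be sketched generically. The bracket values below are purely illustrative, not the actual Swiss federal or cantonal schedules:

```python
def progressive_tax(income, brackets):
    """Generic progressive-bracket tax. `brackets` is a list of
    (upper_bound, rate_pct) pairs in ascending order, with the last
    upper bound set to float('inf')."""
    tax, lower = 0.0, 0.0
    for upper, rate_pct in brackets:
        if income <= lower:
            break
        taxed = min(income, upper) - lower  # slice falling in this bracket
        tax += taxed * rate_pct / 100.0
        lower = upper
    return tax

# Illustrative brackets only -- NOT the real Swiss federal schedule
demo_brackets = [(20_000, 0.0), (50_000, 3.0), (100_000, 6.0),
                 (float("inf"), 10.0)]
```

With these demo brackets, an income of 60,000 CHF is taxed 3% on the 20,000-50,000 slice plus 6% on the remaining 10,000, i.e. 1,500 CHF total.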

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only states the output is 'federal + estimated cantonal tax' without mentioning side effects, return format (e.g., number or object), or whether it requires any authentication. The description is too minimal given the lack of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence, front-loaded with the verb 'Calculate' and key information. It is efficient and without superfluous words, earning a high score for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (two parameters), no output schema is provided, and no annotations exist. The description does not specify what the tool returns (e.g., a single number or breakdown), nor does it mention limitations, assumptions, or accuracy. This leaves gaps for an agent to understand the full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with parameter descriptions for 'canton' and 'income'. The description adds the context of 'estimated cantonal tax' but does not elaborate on how the canton parameter affects the calculation beyond what the schema already implies. Baseline is 3, and the description provides marginal added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Swiss income tax, specifying federal and estimated cantonal components. It uses a specific verb and resource, distinguishing it from other tools like calculate_swiss_salary or calculate_swiss_vat. However, it could be more specific by indicating whether it applies to individuals or corporations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With many sibling tax calculators, the description lacks context on selecting this tool over others, such as calculate_belgian_income_tax or calculate_french_income_tax.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_swiss_lpp (C)

Calculate Swiss occupational pension (LPP / 2e pilier) contributions by age bracket. Returns: {eligible, note}. See list_bundles for related 'finance-suisse' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| age | Yes | Age of employee | |
| gross_annual | Yes | Annual gross salary in CHF | |
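
The "by age bracket" logic presumably follows the statutory BVG retirement-credit scale (7/10/15/18 % of the coordinated salary). A sketch in which the credit rates are the statutory scale but the CHF thresholds are illustrative, year-dependent figures rather than the server's values:

```python
def lpp_contribution(age, gross_annual,
                     coordination_deduction=25_725,  # illustrative figure
                     entry_threshold=22_050):         # illustrative figure
    """Sketch of LPP (2e pilier) retirement credits by BVG age bracket.
    Thresholds change yearly; only the 7/10/15/18 % scale is statutory."""
    if age < 25 or gross_annual < entry_threshold:
        return {"eligible": False, "annual_contribution": 0.0}
    coordinated = max(gross_annual - coordination_deduction, 0)
    if age <= 34:
        rate = 7
    elif age <= 44:
        rate = 10
    elif age <= 54:
        rate = 15
    else:
        rate = 18
    return {"eligible": True,
            "annual_contribution": coordinated * rate / 100.0}
```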

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states the tool calculates contributions by age bracket but gives no details on internal logic (e.g., contribution rates, BVG thresholds, how age bracket mapping works), assumptions, or limitations. The output format is not described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no redundant information. It efficiently communicates the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should explain return values, edge cases, and assumptions. It does not specify what is returned (e.g., a number, breakdown, or array) or any constraints like minimum salary for LPP. The tool appears to be a simple calculation but lacks sufficient context for correct use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for 'age' (17-70) and 'gross_annual' (CHF, min 0). The description adds 'by age bracket', which relates to the age parameter but doesn't clarify if brackets are exact or ranges. No new parameter-level detail is provided beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Swiss occupational pension (LPP / 2e pilier) contributions by age bracket. It specifies the resource and action, and among sibling tools like calculate_swiss_income_tax and calculate_swiss_salary, it is distinct. However, it does not clarify whether contributions are employee, employer, or total, nor does it mention the age bracket granularity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., calculate_swiss_income_tax for income tax, calculate_swiss_salary for net salary calculations). It also lacks context on prerequisites, such as minimum salary or coordination deduction for LPP.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_swiss_pillar3a (C)

Calculate Swiss pillar 3a tax savings (3e pilier lié). Returns: {annual_contribution, net_cost_after_saving, max_employee_2026, max_self_employed_2026}. See list_bundles for related 'finance-suisse' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| marginal_tax_rate | Yes | Marginal income tax rate in % (federal + cantonal combined) | |
| annual_contribution | Yes | Annual contribution to pillar 3a in CHF (max 7056 for employees, 35280 for self-employed) | |
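
The arithmetic implied by the `net_cost_after_saving` output field is a one-liner. A sketch, assuming the saving is simply the contribution times the combined marginal rate (the contribution is deducted from taxable income):

```python
def pillar3a_saving(annual_contribution, marginal_tax_rate):
    """Approximate tax saving from a deductible pillar 3a contribution."""
    saving = annual_contribution * marginal_tax_rate / 100.0
    return {"annual_contribution": annual_contribution,
            "tax_saving": saving,
            "net_cost_after_saving": annual_contribution - saving}
```

At the 7,056 CHF employee maximum and a 30 % combined marginal rate, the saving is about 2,117 CHF, for a net cost of roughly 4,939 CHF.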

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but adds no behavioral context beyond the name. It does not disclose that this is a non-destructive calculation or explain behavior for invalid inputs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no extraneous words. It efficiently conveys the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description does not specify the output format (e.g., single number vs. breakdown), lacks details on assumptions or edge cases, and is too brief for a tool with no output schema or annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema provides 100% coverage with clear descriptions for both parameters (marginal tax rate and annual contribution). The tool description adds no additional meaning, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Swiss pillar 3a tax savings, with the French term '3e pilier lié' aiding understanding. It is distinct from sibling tax calculators like 'calculate_swiss_income_tax' but does not explicitly differentiate itself from them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor on prerequisites such as marginal tax rate knowledge or contribution limits. The description only states functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_swiss_salary (B)

Convert Swiss gross monthly salary to estimated net salary. Returns: {gross_monthly, avs_ai_apg_5_3pct, ac_chomage_1_1pct, lpp_2e_pilier_10pct, lamal_health_fixed, net_monthly, ...}. See list_bundles for related 'finance-suisse' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| gross_monthly | Yes | Gross monthly salary in CHF | |
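
The output field names already encode the assumed deduction rates (AVS/AI/APG 5.3 %, AC 1.1 %, LPP ~10 %, plus a fixed LAMal health premium), so the estimate can be sketched directly; the 350 CHF LAMal default below is an illustrative placeholder, not the server's value:

```python
def swiss_net_salary(gross_monthly, lamal_monthly=350.0):
    """Estimate net from gross using the percentage deductions implied
    by the tool's output field names. LAMal premium is illustrative."""
    avs = gross_monthly * 0.053   # AVS/AI/APG
    ac = gross_monthly * 0.011    # unemployment insurance
    lpp = gross_monthly * 0.10    # occupational pension (approximation)
    net = gross_monthly - avs - ac - lpp - lamal_monthly
    return {"gross_monthly": gross_monthly, "avs_ai_apg_5_3pct": avs,
            "ac_chomage_1_1pct": ac, "lpp_2e_pilier_10pct": lpp,
            "lamal_health_fixed": lamal_monthly, "net_monthly": net}
```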

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'estimated net salary,' hinting at approximation, but does not detail what deductions are considered, the accuracy, or the output format. Given the lack of output schema, this is insufficient for an agent to understand the tool's behavior fully.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that efficiently conveys the tool's purpose. No unnecessary words exist. However, it could be slightly longer to include additional helpful context without losing conciseness, hence 4 rather than 5.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description is minimally adequate. It states the conversion from gross to net but omits mention of assumptions, deductions included, or return value structure. A bit more detail would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the single parameter is described in the schema). The description adds no further semantic value beyond what the schema already provides. Baseline 3 is appropriate since the schema already handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts Swiss gross monthly salary to estimated net salary. The verb 'Convert' and the resource 'Swiss gross monthly salary' are specific. Among many sibling calculators for other countries, this one uniquely handles Swiss salary, distinguishing it from others like calculate_belgian_salary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives no explicit guidance on when to use this tool versus alternatives such as calculate_swiss_income_tax. The context is implied only through the tool name and description; no when-to-use or when-not-to-use instructions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_swiss_vat (B)

Compute Swiss VAT (TVA/MWST) — convert between net (HT) and gross (TTC). Use for invoicing or expense reimbursements in Switzerland. Inputs: amount, rate (8.1, 3.8, 2.6, 0). Returns HT, TTC, and tax amount. See list_bundles for related 'finance-suisse' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| mode | No | Input mode: ht=before tax, ttc=after tax | ht |
| rate | No | VAT rate: 2.6% (reduced), 3.8% (hotel), 8.1% (standard) | 8.1 |
| amount | Yes | Amount in CHF | |
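
The HT/TTC conversion is standard arithmetic. A sketch of what the tool presumably computes:

```python
def swiss_vat(amount, rate=8.1, mode="ht"):
    """Convert between net (HT) and gross (TTC) at a Swiss VAT rate.
    mode='ht': amount is before tax; mode='ttc': amount is after tax."""
    factor = 1 + rate / 100.0
    if mode == "ht":
        ht, ttc = amount, amount * factor
    else:
        ttc, ht = amount, amount / factor
    return {"ht": round(ht, 2), "ttc": round(ttc, 2),
            "tax": round(ttc - ht, 2)}
```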

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools). |
| source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21"). |
| formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula"). |
| reference_url | No | Link to a calcul2 page documenting the calculation in detail. |
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden of behavioral disclosure. It only states it's a calculator/converter, with no mention of side effects, permissions, or return behavior. This is minimal for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, efficient and to the point. It conveys the core purpose without extraneous text, though it could benefit from slight expansion for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator tool with high schema coverage, the description is adequate but lacks information about the return value (e.g., the converted amount). It does not explain the output format, which is a minor gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for all parameters. The description adds the acronyms HT/TTC but does not significantly expand on the parameter meanings beyond what the schema provides. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates Swiss VAT and converts between HT and TTC, specifying the exact resource (Swiss VAT) and the action (convert). This distinguishes it from generic VAT tools and other country-specific calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the name and description (Swiss VAT), but no explicit guidance is given on when to use this versus alternatives like calculate_vat_generic or other country-specific VAT calculators, and no when-not-to-use conditions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_swiss_wealth_tax (C)

Calculate Swiss wealth tax (impôt sur la fortune) by canton. Returns: {net_wealth, tax_free_threshold, taxable_wealth, wealth_tax_rate_pct, annual_wealth_tax, note}. See list_bundles for related 'finance-suisse' calculators.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| canton | No | Canton of residence | geneve |
| net_wealth | Yes | Net wealth in CHF (assets minus debts) | |

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultNoComputed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
sourceNoAuthoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formulaNoHuman-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_urlNoLink to a calcul2 page documenting the calculation in detail.
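The return fields above (tax_free_threshold, taxable_wealth, annual_wealth_tax) suggest a threshold-then-flat-rate calculation. A sketch under that assumption; the threshold and rate below are placeholder values, not the server's cantonal figures:

```python
# Wealth above a tax-free threshold is taxed at a flat rate.
# Threshold and rate vary by canton; values here are illustrative only.
def wealth_tax_chf(net_wealth: float, tax_free_threshold: float, rate_pct: float) -> float:
    taxable = max(0.0, net_wealth - tax_free_threshold)
    return round(taxable * rate_pct / 100, 2)
```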
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only states the tool calculates wealth tax by canton. It fails to disclose that only four cantons are supported, that net wealth must be in CHF, or what the return value looks like. Essential behavioral details are missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single front-loaded sentence with no wasted words. It efficiently conveys the core purpose, though it could include a hint about supported cantons without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description is insufficient. It does not describe return format, limitations (only 4 cantons), or assumptions. For a calculator tool, this is a significant gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and both parameters are described. The description adds no meaning beyond the schema; 'by canton' is already implied by the canton parameter. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Swiss wealth tax by canton, using specific verb 'Calculate' and resource 'Swiss wealth tax (impot sur la fortune)'. It distinguishes from sibling tools like calculate_swiss_income_tax, though it does not explicitly list supported cantons.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. There is no mention of context, prerequisites, or when not to use it, leaving the agent to infer usage solely from the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_tdee (B)

Calculate Total Daily Energy Expenditure from BMR and activity level. Returns: {tdee_kcal}. See list_bundles for related 'sante' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
bmr | Yes | Basal Metabolic Rate in kcal |
activity_level | Yes | Activity level |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
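TDEE is conventionally BMR multiplied by an activity factor. A sketch using the widely cited textbook multipliers; the server's exact level names and values are assumptions here, not documented facts:

```python
# Common activity multipliers (assumed; the server's own table is not shown).
ACTIVITY_FACTORS = {
    "sedentary": 1.2,
    "light": 1.375,
    "moderate": 1.55,
    "active": 1.725,
    "very_active": 1.9,
}

def tdee_kcal(bmr: float, activity_level: str) -> float:
    """Total Daily Energy Expenditure = BMR x activity multiplier."""
    return round(bmr * ACTIVITY_FACTORS[activity_level], 1)
```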
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as side effects, permissions, or limitations. It only states the calculation purpose, leaving the agent to infer safety and behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise and front-loaded. Every word is necessary, and there is no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the tool is simple, the description lacks details about return values or calculation methodology. Without an output schema, the agent has no information about what the response contains. It is adequate but not complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes both parameters. The description adds no additional meaning beyond repeating 'BMR and activity level', which is already in the schema property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates Total Daily Energy Expenditure from BMR and activity level, specifying the resource (TDEE) and inputs. Among many sibling 'calculate_*' tools, this one is unambiguously distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. There is no context for usage beyond the basic function.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_telescope_magnification (C)

Compute telescope magnification, exit pupil, and field of view. Use for astronomy hobbyists. Inputs: telescope focal length, eyepiece focal length, eyepiece field of view. Returns magnification and exit pupil. See list_bundles for related 'astronomie-nature' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
eyepiece_mm | Yes | Eyepiece focal length mm |
focal_length_mm | Yes | Telescope focal length mm |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
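The magnification part of this tool is standard optics: telescope focal length divided by eyepiece focal length. Note that exit pupil additionally requires the aperture, which this schema does not expose, so the sketch covers magnification only:

```python
def telescope_magnification(focal_length_mm: float, eyepiece_mm: float) -> float:
    # Magnification = telescope focal length / eyepiece focal length
    return focal_length_mm / eyepiece_mm
```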
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden of disclosing behavior. It only states a calculation is performed, with no mention of side effects, permissions, or output format. The agent is left to assume a safe read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, very concise with no redundant words. For a simple tool, this is appropriately sized and front-loaded, though it could benefit from slightly more detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema, the description should explain what 'useful limit' means or what the output looks like. It fails to provide this context, leaving the agent without a complete understanding of the tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the input schema already documents both parameters. The description adds no additional meaning beyond what the schema provides (eyepiece mm and focal length mm). Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates telescope magnification and useful limit, which is specific and distinct from the many sibling calculate tools. However, 'useful limit' is not precisely defined, so it could be clearer.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Given the large number of calculate tools, the agent would benefit from explicit usage context, but none is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_tile_grout (C)

Compute grout quantity for a tiling job. Use for renovation budget. Inputs: surface m², tile size, joint width. Returns grout kg needed. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
area_m2 | Yes | Area m² |
tile_cm | Yes | Tile size cm |
depth_mm | No | Joint depth mm |
joint_mm | No | Joint width mm |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
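The standard grout-consumption estimate computes the joint volume per square metre of surface and multiplies by grout density. A sketch under the assumptions of square tiles and a typical mixed-grout density of 1.6 kg/L; the server's actual formula and defaults are not documented:

```python
# Joint volume per m² of tiled surface, in litres, for square tiles:
# (L + W) / (L * W) * joint_width * joint_depth, with tile sides in mm.
GROUT_DENSITY_KG_PER_L = 1.6  # assumed typical density of mixed grout

def grout_kg(area_m2: float, tile_cm: float,
             joint_mm: float = 3.0, depth_mm: float = 8.0) -> float:
    side_mm = tile_cm * 10
    litres_per_m2 = (2 * side_mm) / (side_mm ** 2) * joint_mm * depth_mm
    return round(litres_per_m2 * GROUT_DENSITY_KG_PER_L * area_m2, 2)
```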
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. The description only states the purpose but does not mention that the tool is a read-only calculation, what side effects exist, or any other behavioral traits. It is insufficient for an agent to understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short (5 words) but poorly structured: it is not a complete sentence, which makes it harder for an agent to quickly understand the tool. While concise, it sacrifices clarity and completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (4 parameters, no output schema), the description is too minimal. It does not explain the calculation formula, units, or expected output. A more complete description would aid the agent in understanding how the tool works.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for all 4 parameters (area_m2, tile_cm, depth_mm, joint_mm). The description adds no additional meaning beyond these schema descriptions, but the schema itself is adequate. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Grout quantity for tiling' clearly indicates that the tool calculates grout quantity for tiling projects. However, it does not differentiate itself from the sibling tool 'calculate_tile_quantity', which likely computes the number of tiles instead. The verb is implied rather than explicitly stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. There is no indication of when to use this tool versus alternatives like calculate_tile_quantity. No prerequisites or when-not-to-use guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_tile_quantity (C)

Compute tiles needed including a waste margin (default 10%). Use for floor or wall tiling. Inputs: surface m², tile size, waste %. Returns tile count and surface ordered. See list_bundles for related 'construction' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
area_m2 | Yes | Area m² |
tile_l_cm | Yes | Tile length cm |
tile_w_cm | Yes | Tile width cm |
waste_pct | No | Waste % |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
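The description promises a tile count and the surface ordered. A sketch of the obvious arithmetic, assuming (not confirmed by the server) that the waste margin inflates the area before dividing by one tile's area and rounding up:

```python
import math

def tiles_needed(area_m2: float, tile_l_cm: float, tile_w_cm: float,
                 waste_pct: float = 10.0) -> tuple[int, float]:
    # Inflate the surface by the waste margin, then divide by one tile's area.
    tile_area_m2 = (tile_l_cm / 100) * (tile_w_cm / 100)
    surface_ordered = area_m2 * (1 + waste_pct / 100)
    return math.ceil(surface_ordered / tile_area_m2), round(surface_ordered, 2)
```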
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only says 'with waste margin' without explaining how the waste margin is applied, whether the result is an integer or a float, or how rounding behaves. Behavioral transparency is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of 5 words, no filler. Perfectly concise and front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should explain what 'tiles needed' means (count? area? including waste?). It does not specify output format or calculation approach, making it incomplete for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with minimal descriptions. The description adds 'waste margin' context for waste_pct but does not elaborate on calculation details. Adequate but not enhanced beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states 'Calculate tiles needed with waste margin', clearly indicating the specific verb and resource. It distinguishes from siblings like calculate_tile_grout but does not explicitly differentiate from other similar tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives (e.g., calculate_tile_grout, calculate_paint_quantity). No prerequisites or use cases mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_time_difference (C)

Compute the difference between two times or dates in seconds, minutes, hours, days. Use for project tracking, age, or scheduling. Inputs: start datetime, end datetime. Returns delta in multiple units. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
city1 | Yes | First city |
city2 | Yes | Second city |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
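Note that the description advertises start/end datetime inputs while the schema lists two cities, so the actual behavior is unclear. A sketch of the advertised behavior only, assuming ISO-8601 datetime strings:

```python
from datetime import datetime

# Difference between two datetimes, returned in several units as the
# description promises. Input format (ISO-8601) is an assumption.
def time_delta(start_iso: str, end_iso: str) -> dict[str, float]:
    seconds = (datetime.fromisoformat(end_iso)
               - datetime.fromisoformat(start_iso)).total_seconds()
    return {"seconds": seconds, "minutes": seconds / 60,
            "hours": seconds / 3600, "days": seconds / 86400}
```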
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only states the general action. It does not disclose key behaviors such as whether DST is accounted for, the return format (hours/minutes), or if it uses current time or a specific date.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no unnecessary words. It efficiently communicates the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description fails to explain what the tool returns (e.g., hours, minutes, signed difference). It also omits context about time reference (current time vs. arbitrary date), leaving significant gaps for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions 'First city' and 'Second city'. The description adds no extra parameter information beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates time difference between two major cities. The verb and resource are specific, and the input schema with city enums distinguishes it from sibling tools like calculate_timezone_convert, though this differentiation is not explicitly stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There are no when-to-use or when-not-to-use instructions, and no mention of related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_time_signature_beats (B)

Calculate total beats and duration for a musical passage in bars. Returns: {time_signature}. See list_bundles for related 'musique' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
bpm | Yes | Tempo in beats per minute |
bars | Yes | Number of bars |
beat_value | No | Note value of one beat (denominator of time signature, e.g. 4 for quarter note) |
beats_per_bar | No | Number of beats per bar (numerator of time signature) |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
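The passage-level arithmetic the description implies is straightforward: total beats is bars times beats per bar, and duration follows from the tempo. A sketch assuming the BPM counts the signature's own beat unit (e.g. quarter notes in 4/4); how the server uses beat_value is not documented:

```python
def passage_timing(bpm: float, bars: int, beats_per_bar: int = 4) -> tuple[int, float]:
    # Total beats in the passage, and its duration in seconds at the given tempo.
    total_beats = bars * beats_per_bar
    return total_beats, round(total_beats * 60.0 / bpm, 2)
```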
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavior beyond the schema. It only states 'calculate total beats and duration' without explaining output format, side effects (none), or that two parameters have defaults. The description is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single clear sentence that is front-loaded with the verb and resource. It is concise but could benefit from a brief additional detail about parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters and no output schema or annotations, the description is too brief. It does not explain return values (e.g., beats and duration), note optional parameters, or provide context for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters have descriptions in the schema (100% coverage). The description adds no additional parameter-specific meaning beyond the schema, so it meets the baseline but does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates total beats and duration for a musical passage, using specific musical terms. It is distinct from sibling tools like calculate_bpm_to_ms or calculate_frequency_note, focusing on passage-level timing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., other time or music calculators). The description does not mention use cases, prerequisites, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_timezone_convert (C)

Convert a time between two UTC offsets accounting for date rollover. Use for international meetings. Inputs: time, from-utc, to-utc. Returns local time and date offset. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
time | Yes | Time to convert HH:MM |
to_offset | Yes | Target UTC offset hours |
from_offset | Yes | Source UTC offset hours (e.g. 1 for UTC+1) |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
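The conversion the description names is fixed-offset arithmetic with a day shift when the result crosses midnight. A sketch of that behavior; the server's exact return shape is an assumption:

```python
def convert_time(time_hhmm: str, from_offset: float, to_offset: float) -> tuple[str, int]:
    # Shift HH:MM by the offset difference; day_shift is -1/0/+1 for rollover.
    h, m = map(int, time_hhmm.split(":"))
    total = h * 60 + m + round((to_offset - from_offset) * 60)
    day_shift, total = divmod(total, 24 * 60)
    return f"{total // 60:02d}:{total % 60:02d}", day_shift
```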
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full burden but only says 'Convert time between UTC offsets'. It does not disclose behavior like handling of invalid inputs, output format, or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise single sentence, but could include more information without becoming verbose. It is front-loaded but under-specified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 required parameters and no output schema, the description is too minimal. It does not explain the return value, edge cases, or how the conversion works, which is insufficient for a tool with many siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts time between UTC offsets, distinguishing it from siblings like calculate_time_difference or calculate_timezone_offset. However, it could be more explicit about inputting a time with a source offset and outputting with a target offset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like calculate_time_zone_difference or calculate_timezone_offset. No exclusions or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_time_zone_difference (B)

Compute the time difference (hours) between two timezones. Use for international meetings. Inputs: timezone A, timezone B. Returns delta hours and current local times. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
city1 | Yes | First city |
city2 | Yes | Second city |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states it calculates hour difference and current local time but does not specify how it handles daylight saving time, which city's local time is returned, or that it uses a fixed city list. The behavioral disclosure is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise but lacking structure. It covers the basic purpose but omits necessary details like output format. It earns its place but is not optimally organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should clearly state what the tool returns. It mentions 'current local time' but is ambiguous about which city's time. Considering the complexity and sibling tools, the description is incomplete for an agent to reliably use the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema provides 100% coverage with enum descriptions, but they are minimal ('First city', 'Second city'). The description adds context that the output will include hour difference and local time, which goes beyond the schema. However, it does not explain the role or expected format of the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Calculate hour difference between two major cities and current local time'. The verb 'Calculate' and the resource (hour difference and local time) are specific, and the input schema with two city enums distinguishes it from siblings like calculate_time_difference or calculate_timezone_offset.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like calculate_time_difference or calculate_timezone_offset. There is no mention of prerequisites, exclusions, or context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_timezone_offset (A)

Compute current UTC offset for a timezone, accounting for DST. Use for scheduling and date math. Inputs: timezone (IANA name). Returns UTC offset and DST status. See list_bundles for related 'temps-rh' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to_zone | Yes | Target time zone | (none)
from_zone | Yes | Source time zone | (none)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates it returns an hour difference but does not specify whether the result is signed or absolute, nor how daylight saving time is handled. With no annotations, more detail would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, front-loaded sentence contains all necessary information with no redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two enum parameters and no output schema, the description is adequate but could be improved by mentioning the return format or sign convention.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with enums and descriptions; the description adds 'standard time zones' but does not enhance understanding beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates the hour difference between two standard time zones, which is specific and distinct from sibling tools like calculate_timezone_convert.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus similar siblings like calculate_time_difference or calculate_timezone_convert, nor any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_tip (C)

Compute restaurant tip and per-person split. Use for shared meals. Inputs: bill, tip %, people count. Returns tip, total, per-person share. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
bill | Yes | Bill amount | (none)
split | No | Number of people splitting | (none)
tip_pct | No | Tip percentage | (none)
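The arithmetic is simple, but the review notes that rounding behavior and the defaults for the optional parameters are undocumented. A sketch with assumed defaults (15% tip, one person) and cent rounding — both assumptions that may not match the server:

```python
def tip_split(bill: float, tip_pct: float = 15.0, split: int = 1) -> dict:
    """Tip, total, and per-person share. The 15% default and the
    cent rounding here are assumptions, not documented server behavior."""
    tip = round(bill * tip_pct / 100, 2)
    total = round(bill + tip, 2)
    return {"tip": tip, "total": total, "per_person": round(total / split, 2)}

print(tip_split(80.0, tip_pct=18, split=4))
# {'tip': 14.4, 'total': 94.4, 'per_person': 23.6}
```

Note that naive per-person rounding can make the shares sum to slightly more or less than the total; whether the server compensates for this is exactly the undisclosed behavior the review flags.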

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description only says 'Calculate tip amount and split between people' with no details on rounding, currency, or other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One succinct sentence with no wasted words, though it could include more useful information without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema and no annotations; description lacks details on return format, rounding, or edge cases. Incomplete for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3. Description adds no extra meaning beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it calculates tip and split. But doesn't distinguish from siblings like 'calculate_tip_split' and 'calculate_tip_worldwide'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_tip_split' or 'calculate_tip_worldwide'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_tip_split (B)

Calculate tip and per-person amount for a restaurant bill. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
tip_pct | Yes | Tip percentage | (none)
num_people | Yes | Number of people splitting | (none)
bill_amount | Yes | Total bill amount | (none)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only describes the calculation, omitting details like rounding, currency handling, or that it is read-only. For a simple calculator, this is acceptable but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded and contains no superfluous words. It is appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a straightforward calculator with clear parameter names and no output schema, the description is complete enough. The lack of annotations is mitigated by the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description does not add extra meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates tip and per-person amount for a restaurant bill. However, it does not differentiate from siblings like 'calculate_tip' or 'calculate_tip_worldwide' which may have similar functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives. The description only states what it does without specifying context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_tip_worldwide (C)

Compute restaurant tip following local custom (US 18-22%, FR included, JP no tip, etc.). Use when traveling. Inputs: bill, country. Returns recommended tip and total. See list_bundles for related 'vie-quotidienne' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
bill | Yes | Bill | (none)
country | Yes | Country | (none)
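The description names a few per-country customs (US 18-22%, FR service included, JP no tipping). A sketch of the lookup-plus-percentage shape such a tool implies; the three listed rates come from the description, while the US midpoint and the fallback rate are outright assumptions:

```python
TIP_CUSTOMS = {          # country -> customary tip rate
    "US": 0.20,          # description says 18-22%; midpoint assumed
    "FR": 0.00,          # service included in the bill
    "JP": 0.00,          # no tipping custom
}

def tip_worldwide(bill: float, country: str) -> dict:
    """Recommended tip and total per local custom (illustrative rates)."""
    rate = TIP_CUSTOMS.get(country, 0.10)  # 10% fallback is an assumption
    tip = round(bill * rate, 2)
    return {"tip": tip, "total": round(bill + tip, 2)}

print(tip_worldwide(100.0, "US"))  # {'tip': 20.0, 'total': 120.0}
print(tip_worldwide(100.0, "JP"))  # {'tip': 0.0, 'total': 100.0}
```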

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not disclose any behavioral traits, such as how the tip is computed (e.g., percentage ranges per country), whether tax is included, or what the output represents (tip amount vs. total). With no annotations, the description fails to provide necessary transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At four words, the description is overly concise and sacrifices clarity for brevity. While it is front-loaded, it omits critical details, making it insufficiently informative for an MCP tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It does not specify return values, calculation logic, or edge cases (e.g., zero bill, unsupported country). More context is needed for an agent to use this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters ('Bill' and 'Country' with enum). The description adds minimal value beyond what the schema states, but the schema itself is clear. Baseline 3 is appropriate as the description does not enhance meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Tip by country customs' vaguely implies calculating tips based on country customs but lacks a clear verb (e.g., 'calculate') and does not explicitly state the tool's function. It fails to distinguish from sibling tools like 'calculate_tip' and 'calculate_tip_split'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'calculate_tip' or 'calculate_tip_split'. There is no mention of scenarios where this tool is preferred or not.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_torus (C)

Compute torus volume V=2π²Rr² and surface area. Use for ring-shaped objects (donuts, inner tubes). Inputs: major radius R, minor radius r. Returns volume and area. See list_bundles for related 'math' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
major_r | Yes | Major radius (center to tube center) | (none)
minor_r | Yes | Minor radius (tube radius) | (none)
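The description states the volume formula V = 2π²Rr²; the matching closed form for the surface area of a ring torus is A = 4π²Rr. A sketch of both — the return field names and the validation check are assumptions, not the server's documented behavior:

```python
import math

def torus(major_r: float, minor_r: float) -> dict:
    """Torus volume V = 2*pi^2*R*r^2 and surface area A = 4*pi^2*R*r."""
    if minor_r > major_r:
        # A self-intersecting (spindle) torus needs different formulas.
        raise ValueError("minor radius must not exceed major radius")
    return {
        "volume": 2 * math.pi**2 * major_r * minor_r**2,
        "surface_area": 4 * math.pi**2 * major_r * minor_r,
    }

print(torus(major_r=3.0, minor_r=1.0))
# volume = 6*pi^2 (about 59.22), surface_area = 12*pi^2 (about 118.44)
```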

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states what is computed, not how results are returned (e.g., single value vs. object) or any constraints beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (one noun phrase) but lacks structure. It front-loads the key information but could be improved with a full sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation with two numeric parameters and no output schema, the description fails to specify the return format (e.g., object with volume and surface_area). This leaves ambiguity for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter names and descriptions. The tool description adds no extra meaning beyond what the schema provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes 'Torus volume and surface area', which is specific and distinguishes it from siblings by naming the shape. However, it lacks an explicit verb like 'calculates'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus the many other calculation tools. The description provides no context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_training_zones_running (A)

Calculate 6 running training zones as speed ranges based on VMA. Returns: {vma_kmh}. See list_bundles for related 'sport' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
vma | Yes | VMA (Maximal Aerobic Speed) in km/h | (none)
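A sketch of the general shape of such a calculator: speed ranges derived as percentage bands of VMA. The band boundaries below are illustrative placeholders only — the server does not document its actual zone definitions, which is precisely the gap the Behavior review notes:

```python
def vma_speed_zones(vma_kmh: float) -> dict:
    """Six training zones as (low, high) speeds in km/h, derived as %VMA bands.
    Band boundaries are hypothetical, not the server's definitions."""
    bands = {
        "recovery":  (50, 60),
        "endurance": (60, 70),
        "tempo":     (70, 80),
        "threshold": (80, 90),
        "vo2max":    (90, 100),
        "sprint":    (100, 110),
    }
    return {
        zone: (round(vma_kmh * lo / 100, 1), round(vma_kmh * hi / 100, 1))
        for zone, (lo, hi) in bands.items()
    }

print(vma_speed_zones(16.0)["vo2max"])  # (14.4, 16.0)
```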

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description only states that it calculates zones based on VMA. It does not disclose behavioral traits such as the exact zone definitions, any assumptions, or the return format, leaving the agent with limited understanding of the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise, front-loaded sentence that efficiently communicates the tool's purpose without any redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is adequate but lacks information about the return value, such as how the zones are represented (e.g., as speed ranges). The absence of output schema increases the need for descriptive completeness, which is partially unmet.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema fully describes the single parameter (vma) with a minimum value and description. The tool description adds no additional meaning beyond stating that the parameter is used to calculate zones, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool calculates 6 running training zones as speed ranges based on VMA. It specifies the verb 'calculate', the resource 'running training zones', and the method 'based on VMA', effectively distinguishing it from sibling tools like calculate_heart_rate_zones.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when VMA is known and speed zones for running are needed, but it does not explicitly state when to use this tool versus alternatives, nor does it provide when-not-to-use guidance or exclude other scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_travel_budget (B)

Estimate total trip budget by category (transport, accommodation, food, activities). Use for trip planning. Inputs: destination, days, traveler count, comfort level. Returns total and per-day breakdown. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
days | Yes | Number of travel days | (none)
travelers | Yes | Number of travelers | (none)
destination | Yes | Destination region | (none)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears the full burden of behavioral disclosure. It only states it 'estimates total travel budget' without detailing what the estimate includes (e.g., accommodation, food, transport, currency), accuracy, or whether it's per-person or total. This lacks sufficient transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the tool's core function. It contains no filler or redundant information, making it appropriately concise for its simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should provide more context about the output (e.g., currency, per-person vs. total, what costs included). It leaves significant gaps for an agent that needs to understand what the tool returns and how to interpret the estimate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description mentions the three parameters (destination, days, travelers) but adds no meaning beyond the schema's own descriptions. It simply restates them without providing additional context like budget ranges or calculation methodology.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates a total travel budget based on destination, duration, and number of travelers. It uses a specific verb ('estimate') and resource ('travel budget'), and differentiates from the many sibling calculate tools by its unique focus on travel budgeting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for estimating travel budgets but provides no explicit guidance on when to use this tool versus alternatives (e.g., calculate_travel_insurance). No exclusionary context or when-not-to-use advice is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_travel_insurance (C)

Calculate estimated travel insurance cost based on destination, duration, age and activities. Returns: {base_per_day_eur, activity_factor, estimated_premium_eur, coverage_tips}. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
age | Yes | Traveler age in years | (none)
activities | Yes | Activity level: standard (city/beach), adventure (hiking/skiing), extreme (mountaineering/motorsport) | (none)
destination | Yes | Travel destination zone | (none)
duration_days | Yes | Trip duration in days | (none)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description only says 'calculate estimated cost' without disclosing the nature of the operation (e.g., read-only, return format, or potential limitations). Minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no fluff, front-loaded with the action. It is concise and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks details about output format, margin of error, or assumptions. With no output schema, the description should provide more context about what the tool returns, but it does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with detailed parameter descriptions. The tool description merely lists the parameter names without adding extra meaning, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates estimated travel insurance cost based on four factors. However, it does not differentiate from a sibling tool with a nearly identical name (calculate_travel_insurance_estimate), which is a minor gap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, such as the similarly named sibling. No prerequisites, contexts, or exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_travel_insurance_estimate (C)

Estimate travel insurance cost based on trip cost, age, duration, destination. Use for trip planning. Inputs: trip cost, traveler age, days, region. Returns premium estimate. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
age | Yes | Traveler age | (none)
days | Yes | Trip duration days | (none)
destination | Yes | Destination region | (none)
coverage_level | Yes | Coverage level | (none)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not disclose behavioral traits such as whether the estimate is approximate, what factors are considered, or the format of the result. The description is too minimal to inform the agent about what happens when the tool is called.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence and very short, but it is under-specified. Conciseness is achieved at the expense of necessary information, making it insufficient for an agent to use the tool effectively.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's 4 required parameters, no output schema, and numerous sibling tools, the description is incomplete. It fails to explain the calculation basis, returned value, or how it relates to similar tools like 'calculate_travel_insurance'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter description coverage, so the schema already documents each parameter. The description adds no additional meaning or relationships between parameters. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Estimate travel insurance cost' clearly states the verb (estimate) and resource (travel insurance cost), but it does not distinguish this tool from the sibling 'calculate_travel_insurance', leaving ambiguity about the difference. The purpose is clear but not fully specified.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'calculate_travel_insurance' or other travel calculators. There is no mention of scenarios, prerequisites, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_triangle_heron (B)

Compute triangle area from three side lengths using Heron's formula. Use when angles aren't known. Inputs: sides a, b, c. Returns area, perimeter, type (equilateral/isoceles/scalene). See list_bundles for related 'math' calculators.
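
For reference, the method the description names can be sketched in Python. This is an illustrative sketch, not the server's implementation; in particular, the triangle-inequality check is an assumption, since the tool's own validation behavior is not documented.

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Triangle area from three side lengths via Heron's formula."""
    # Rejecting degenerate side lengths is an assumption; the tool's
    # actual error handling is not documented.
    if a + b <= c or b + c <= a or a + c <= b:
        raise ValueError("sides do not form a valid triangle")
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))  # → 6.0
```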

Parameters (JSON Schema)
Name | Required | Description | Default
a | Yes | Side a |
b | Yes | Side b |
c | Yes | Side c |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It only states the formula used but does not mention whether it validates inputs (e.g., ensuring sides form a triangle), handles errors, returns area units, or any precision details. The tool's simplicity mitigates the need for extensive disclosure, but key behavioral traits are absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with no redundant information. It front-loads the core purpose effectively, and every clause earns its place: it immediately conveys the tool's function, method, inputs, and outputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple mathematical function with a clear name and schema, the minimal description may suffice. However, it does not mention error handling (e.g., sides that violate the triangle inequality). Given the low complexity and the absence of an output schema, a higher score would require more detail, but this is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions like 'Side a' which are clear. The description adds no extra meaning beyond the schema. Given high coverage, a baseline score of 3 is appropriate, as the description does not enhance understanding of the parameters further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool computes triangle area using Heron's formula. The verb 'compute' and the resource (triangle area) are specific. While it does not explicitly distinguish every sibling tool, the mention of Heron's formula sets it apart from generic area calculators like 'calculate_area'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description does not mention prerequisites, valid input conditions (e.g., triangle inequality), or scenarios where other tools might be more appropriate. With many sibling calculators, this omission could lead to incorrect tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_trigonometry (B)

Compute sin, cos, tan and inverse functions in degrees or radians. Use for geometry, physics, navigation. Inputs: function, value, unit. Returns result and reciprocal. See list_bundles for related 'math' calculators.
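
The degrees/radians handling the description implies can be sketched as follows. The function names and validation here are illustrative assumptions, not the server's actual behavior; inverse functions take a ratio and return an angle in the requested unit.

```python
import math

def trig(func: str, value: float, unit: str = "degrees") -> float:
    """Evaluate a trig function, converting degrees to radians first.
    Names and dispatch here are illustrative, not the server's."""
    x = math.radians(value) if unit == "degrees" else value
    forward = {"sin": math.sin, "cos": math.cos, "tan": math.tan}
    inverse = {"asin": math.asin, "acos": math.acos, "atan": math.atan}
    if func in forward:
        return forward[func](x)
    if func in inverse:
        # Inverse functions take a ratio, not an angle; return the
        # resulting angle in the requested unit.
        r = inverse[func](value)
        return math.degrees(r) if unit == "degrees" else r
    raise ValueError(f"unknown function: {func}")

print(round(trig("sin", 30), 4))  # → 0.5
```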

Parameters (JSON Schema)
Name | Required | Description | Default
func | Yes | Trig function |
unit | No | Input angle unit | degrees
value | Yes | Input value |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden for behavioral disclosure. It states what is calculated and returned but does not mention input validation (e.g., domain errors for inverse functions or tan near 90 degrees), error handling, or precision. This is a notable gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact and immediately conveys the tool's purpose with no wasted words. It is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the schema's high coverage, the description is minimally adequate, but it lacks details on error behavior and domain restrictions for inverse functions. It is functional but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents parameters. The description adds marginal value by listing the enum values for 'func', but these are already in the schema. Thus, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'compute' and the resource 'trigonometric functions', naming sin, cos, tan and their inverses. This distinguishes it effectively from the long list of sibling calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. It simply states what it does without contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_uk_council_tax (A)

Compute UK Council Tax based on property band and local authority rate. Use for UK residents and home buyers. Inputs: band (A-H), local authority. Returns annual and monthly council tax. See list_bundles for related 'finance-uk' calculators.
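
The band system behind such a calculation can be sketched as below. The statutory England band ratios (expressed in ninths of the band D charge) are well established, but the band D amount is a caller-supplied placeholder here, and billing in 12 equal instalments is an assumption (many councils bill over 10 months); the function name and shape are illustrative, not the server's.

```python
# England's statutory Council Tax band ratios, in ninths of the band D charge.
BAND_NINTHS = {"A": 6, "B": 7, "C": 8, "D": 9, "E": 11, "F": 13, "G": 15, "H": 18}

def council_tax(band: str, band_d_annual: float) -> dict:
    """Annual and monthly Council Tax for a band, given a local
    authority's band D charge. band_d_annual is caller-supplied here;
    real charges vary by authority and year."""
    annual = band_d_annual * BAND_NINTHS[band.upper()] / 9
    # Dividing by 12 is an assumption; councils commonly bill over 10 months.
    return {"annual": round(annual, 2), "monthly": round(annual / 12, 2)}
```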

Parameters (JSON Schema)
Name | Required | Description | Default
band | Yes | Council Tax band (A=lowest, H=highest) |
region | No | Region | england

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description names the basis of the calculation (property band and local authority rate) and the returned amounts (annual and monthly). With no annotations, the description carries the transparency burden; it adequately discloses the basic behavior but could mention data sources, the tax year assumed, or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the key purpose and contains no unnecessary information. It is optimally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (two enum params, no output schema), the description covers the essential purpose, inputs, and outputs. Some additional context (e.g., the tax year assumed) would improve completeness, but it is adequate for a straightforward estimation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides 100% coverage with enums and descriptions for both parameters. The description does not add additional semantics beyond what the schema offers, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool computes UK Council Tax by band and local authority. It uses a specific verb-resource combination and distinguishes itself from sibling tax tools. However, it does not specify the tax year assumed, which slightly limits clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for council tax estimation but provides no explicit guidance on when to use this tool versus alternatives like other UK tax tools. No exclusions or comparisons are given, only an implied context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_uk_income_tax (B)

Calculate UK income tax for 2025/26 using HMRC progressive brackets with personal allowance taper. Returns: {gross_income, personal_allowance, taxable_income, income_tax, effective_rate_pct, marginal_rate_pct}. See list_bundles for related 'finance-uk' calculators.
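
The bracket-plus-taper structure the description names can be sketched as follows. The thresholds and rates are the published England/Northern Ireland figures (frozen since 2021/22); treat them, and the function name, as illustrative assumptions rather than the server's exact data.

```python
def uk_income_tax(income: float) -> float:
    """England/NI income tax with personal allowance taper.
    Figures are the published frozen thresholds; illustrative only."""
    # Personal allowance £12,570, reduced by £1 per £2 of income over £100,000.
    allowance = max(0.0, 12_570 - max(0.0, income - 100_000) / 2)
    taxable = max(0.0, income - allowance)
    # Bands apply to taxable income: 20% on the first £37,700,
    # 40% from £37,700 to £125,140, 45% above.
    bands = [(37_700, 0.20), (125_140 - 37_700, 0.40), (float("inf"), 0.45)]
    tax, remaining = 0.0, taxable
    for width, rate in bands:
        portion = min(remaining, width)
        tax += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return round(tax, 2)
```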

Parameters (JSON Schema)
Name | Required | Description | Default
income | Yes | Annual gross income in GBP |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It states the calculation method and return fields but omits constraints and edge cases (e.g., handling of negative or zero income).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description immediately states the purpose, method, and return fields with no redundancy. It is well front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, no output schema), the description covers the core purpose and return fields. However, it lacks information on assumptions (e.g., whether the figure covers income tax only, excluding National Insurance) or special cases, which limits completeness for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes the single 'income' parameter with constraints and description. The description echoes 'annual gross income in GBP' but adds no new semantic value beyond what the schema provides, earning a baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the action (calculate), target (UK income tax for 2025/26), and method (HMRC progressive brackets with personal allowance taper). It effectively distinguishes from sibling tax calculators for other countries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternative tax calculators (e.g., for different years or jurisdictions). No prerequisites or limitations are mentioned, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_uk_ni_contributions (A)

Calculate UK National Insurance contributions (Class 1 employee) for 2025/26. Returns: {annual_salary, ni_annual, ni_monthly, effective_rate_pct}. See list_bundles for related 'finance-uk' calculators.
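
A sketch of the Class 1 employee calculation, using the published primary threshold, upper earnings limit, and the 8%/2% employee rates. These constants and the annualized treatment are assumptions about what the tool applies; real NI is assessed per pay period, not annually.

```python
def uk_ni_class1_employee(annual_salary: float) -> float:
    """Annual Class 1 employee NI: 8% between the primary threshold and
    the upper earnings limit, 2% above. Constants are illustrative."""
    PT, UEL = 12_570, 50_270  # published thresholds; treat as assumptions
    main = max(0.0, min(annual_salary, UEL) - PT) * 0.08
    upper = max(0.0, annual_salary - UEL) * 0.02
    return round(main + upper, 2)
```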

Parameters (JSON Schema)
Name | Required | Description | Default
annual_salary | Yes | Annual gross salary in GBP |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states what is calculated and returned, but it does not disclose behavioral traits such as whether employer contributions are included or how thresholds and rate bands are applied. The minimal description fails to add transparency beyond the obvious.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and conveys the purpose, contribution class, tax year, and return fields. It is front-loaded with the key information; a brief note on assumptions (e.g., annual rather than per-pay-period calculation) could be added without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter, the description is largely adequate: it states the purpose, contribution class, tax year, and returned fields. It could be more complete by noting assumptions (e.g., how thresholds are applied) to fully inform the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (annual_salary is described as 'Annual gross salary in GBP'). The description does not add any extra meaning about the parameter beyond what the schema already provides. Baseline score of 3 is appropriate as schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates UK National Insurance contributions for Class 1 employees for a specific tax year (2025/26). It uses a specific verb-resource combination and distinguishes itself from numerous sibling tools like calculate_uk_income_tax and calculate_uk_vat by focusing on NI contributions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly provide guidance on when to use this tool versus alternatives. While it implies it's for employee NI contributions, it lacks explicit exclusions or mentions of other NI classes (e.g., Class 2/4) or scenarios like self-employment. No alternatives are named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_uk_student_loan (C)

Calculate UK student loan repayments based on plan type and salary. Returns: {annual_salary, repayment_rate_pct, annual_repayment, monthly_repayment}. See list_bundles for related 'finance-uk' calculators.
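
The plan-based threshold logic can be sketched as below. The threshold and rate constants are illustrative figures in the style of recent UK plans (they change every April and should be checked against gov.uk); nothing here is taken from the server.

```python
# Illustrative thresholds (GBP/year) and rates per plan; NOT authoritative.
PLANS = {
    "1": (26_065, 0.09),
    "2": (28_470, 0.09),
    "4": (32_745, 0.09),
    "5": (25_000, 0.09),
    "postgrad": (21_000, 0.06),
}

def student_loan_repayment(annual_salary: float, plan: str = "2") -> dict:
    """Repayment is a flat rate on salary above the plan's threshold;
    nothing is due below the threshold."""
    threshold, rate = PLANS[plan]
    annual = max(0.0, annual_salary - threshold) * rate
    return {"annual_repayment": round(annual, 2),
            "monthly_repayment": round(annual / 12, 2)}
```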

Parameters (JSON Schema)
Name | Required | Description | Default
plan | No | Student loan plan: 1, 2, 4, 5, or postgrad | 2
annual_salary | Yes | Annual gross salary in GBP |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden. It states the calculation and return fields but does not disclose behavioral traits such as the repayment thresholds assumed for each plan or the tax year applied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with no wasted words. It is front-loaded and efficient, though it could accommodate slightly more detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no tool-specific output schema, and the description does not state which tax year's thresholds apply or how sub-threshold salaries are handled. For a financial calculation, this is a notable lack of completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters, and the description adds no additional meaning beyond stating 'plan type' and 'salary'. The schema already documents the enum and salary details effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates UK student loan repayments based on plan type and salary, which is a specific verb and resource. However, it does not differentiate from the sibling tool 'calculate_student_loan_repayment', which may have similar functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description simply states what it does without any context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_uk_vat (B)

Calculate UK VAT — convert between net (ex-VAT) and gross (inc-VAT) amounts. Returns: {amount_net, amount_gross, vat_amount, vat_rate_pct}. See list_bundles for related 'finance-uk' calculators.
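
The net/gross conversion is simple enough to state exactly. This sketch mirrors the documented mode and rate parameters; the function name and rounding to two decimals are assumptions.

```python
def uk_vat(amount: float, mode: str = "ht", rate: float = 20) -> dict:
    """Convert between net (ex-VAT) and gross (inc-VAT) amounts.
    mode='ht' treats amount as net; mode='ttc' treats it as gross."""
    r = rate / 100
    if mode == "ht":
        net, gross = amount, amount * (1 + r)
    else:  # 'ttc'
        net, gross = amount / (1 + r), amount
    return {"amount_net": round(net, 2),
            "amount_gross": round(gross, 2),
            "vat_amount": round(gross - net, 2),
            "vat_rate_pct": rate}
```

Calling `uk_vat(120, mode="ttc")` recovers the net amount from a gross figure at the default 20% standard rate.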

Parameters (JSON Schema)
Name | Required | Description | Default
mode | No | Input mode: ht=net (ex-VAT), ttc=gross (inc-VAT) | ht
rate | No | VAT rate: 0% (zero), 5% (reduced), 20% (standard) | 20
amount | Yes | Amount in GBP |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It states the conversion purpose and return fields but does not disclose behavioral traits such as rounding behavior, validation rules, or limits. For a simple calculator, minimal disclosure is a gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with the key information and free of fluff. Efficient and scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculator tool with full schema coverage, the description is adequate. The currency (GBP) is documented in the schema rather than the description, but the description is otherwise complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters (amount, mode, rate) with descriptions and defaults. The description adds no additional context beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and resource 'UK VAT', specifying the conversion between net and gross amounts. It distinguishes itself from similar VAT tools (e.g., calculate_french_vat) by being UK-specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like calculate_vat_generic or other country-specific VAT tools. The description does not mention scenarios or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_unemployment_benefit (B)

Estimate French unemployment benefit (ARE — Aide au Retour a l'Emploi). Returns: {daily_ref_salary, daily_are, monthly_are_estimate, min_daily, max_daily_75pct_sjr}. See list_bundles for related 'finance-france' calculators.
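
The shape of the ARE rule (the better of two formulas, floored at a minimum and capped at 75% of the SJR) matches the return fields the description lists. In this sketch, the rule shape follows France Travail's published method, but the euro constants are illustrative and change with annual revaluations; the function name is hypothetical.

```python
def are_daily(sjr: float,
              fixed_part: float = 13.11,    # illustrative constant
              min_daily: float = 31.97) -> float:  # illustrative constant
    """Daily ARE estimate: max of 57% of SJR or 40.4% of SJR plus a
    fixed part, floored at a minimum and capped at 75% of SJR."""
    raw = max(0.57 * sjr, 0.404 * sjr + fixed_part)
    return round(min(max(raw, min_daily), 0.75 * sjr), 2)
```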

Parameters (JSON Schema)
Name | Required | Description | Default
daily_ref_salary | Yes | Salaire Journalier de Reference (SJR) in euros — typically last 12 months salary / 261 |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description frames the result only as an estimate. It does not disclose the basis of the estimation, the assumptions made, or the limitations of the result. For an estimation tool, more context is expected.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient and to the point. While its structure is minimal, that is appropriate for a simple tool, and the current form is not wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of a tool-specific output schema, the description leans entirely on the listed return fields and does not note any limitations or applicability ranges (e.g., eligibility conditions or benefit duration). For a benefits estimate, this omission reduces completeness significantly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already provides a detailed description of the single parameter (daily_ref_salary), covering its meaning and calculation reference. The tool description adds no extra meaning, and with 100% schema coverage, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: estimating French unemployment benefit (ARE). The verb 'estimate' is appropriate and specific, and the acronym provides further clarity. Among many calculate_* tools, this one is uniquely identified.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs. alternatives, or prerequisites for using ARE. The description lacks context such as eligibility or typical scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_unit_price (B)

Compare unit prices across packages to find the best deal. Use for shopping. Inputs: list of {price, quantity, unit}. Returns price per unit and best buy. See list_bundles for related 'education' calculators.
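
The comparison can be sketched as follows. The same-unit restriction is an assumption, since the tool's handling of mixed units is not documented; the function name and output shape are illustrative.

```python
def best_unit_price(items: list[dict]) -> dict:
    """Annotate each {price, quantity, unit} item with its price per
    unit and pick the cheapest. Assumes a single shared unit; mixed-unit
    normalization is not attempted here."""
    units = {item["unit"] for item in items}
    if len(units) > 1:
        raise ValueError(f"mixed units not supported in this sketch: {units}")
    for item in items:
        item["price_per_unit"] = round(item["price"] / item["quantity"], 4)
    best = min(items, key=lambda i: i["price_per_unit"])
    return {"items": items, "best_buy": best}

result = best_unit_price([
    {"price": 2.50, "quantity": 500, "unit": "g"},
    {"price": 4.20, "quantity": 1000, "unit": "g"},
])
print(result["best_buy"]["price"])  # → 4.2
```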

Parameters (JSON Schema)
Name | Required | Description | Default
items | Yes | Items to compare |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It does not mention how unit comparisons handle different units, whether input items are sorted, or what the output format is. For a tool that likely requires unit conversion or normalization, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words. It is front-loaded with the core action. However, it may be too concise, lacking details that could improve usability without adding excessive length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description fails to explain what the tool returns (e.g., best deal, sorted list, all comparisons). It also omits edge cases like mismatched units, empty input, or invalid data. The user cannot fully anticipate tool behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and description of the 'items' array and its properties is complete. However, the description adds no extra meaning beyond the schema—it simply restates 'compare unit prices' without elaborating on how each parameter (name, price, quantity, unit) is used or validated.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: comparing unit prices to find the best deal. It uses a specific verb ('compare') and identifies the resource ('unit prices') and intended outcome, distinguishing it from other calculation tools in the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites, limitations, or when not to use it. Among many sibling calculation tools, this omission makes it harder for an agent to select appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_401k (grade B)

Calculate US 401(k) contribution, employer match, and total retirement savings. Returns: {annual_salary, employee_contribution, employer_match, total_annual_contribution, catch_up_eligible, max_employee_limit}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  age (optional): Employee age (50+ enables catch-up contributions)
  annual_salary (required): Annual salary in USD
  contribution_pct (required): Employee contribution percentage (1-100)
  employer_match_pct (optional): Employer match percentage of employee contribution (default 50%)
  employer_match_limit (optional): Employer match cap as % of salary (default 6%)

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
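The arithmetic the schema defaults imply (a 50% match on the employee contribution, capped at 6% of salary) can be sketched as follows. The deferral limit and catch-up amount here are illustrative placeholders; the IRS adjusts the real figures annually, and the server's values are not documented:

```python
# Sketch of a 401(k) contribution calculation. The employee_limit and
# catch_up constants are illustrative placeholders, not official limits.
def us_401k(annual_salary, contribution_pct, employer_match_pct=50.0,
            employer_match_limit=6.0, age=None,
            employee_limit=23_000, catch_up=7_500):
    catch_up_eligible = age is not None and age >= 50
    cap = employee_limit + (catch_up if catch_up_eligible else 0)
    employee = min(annual_salary * contribution_pct / 100, cap)
    # The match applies to the employee contribution, capped at a % of salary.
    matchable = min(employee, annual_salary * employer_match_limit / 100)
    employer = matchable * employer_match_pct / 100
    return {"employee_contribution": employee,
            "employer_match": employer,
            "total_annual_contribution": employee + employer,
            "catch_up_eligible": catch_up_eligible}

r = us_401k(annual_salary=100_000, contribution_pct=10)
# employee 10_000; matchable capped at 6_000; employer match 3_000
```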
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fails to disclose behavioral traits such as whether it is read-only, side effects, or output format. It only states what it calculates, not how it behaves or what constraints apply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, clear sentence of 12 words that efficiently conveys the tool's purpose with no unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite good schema coverage, the description lacks context about output format, handling of 401(k) contribution limits, or edge cases like catch-up contributions. The absence of an output schema exacerbates this gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are already well-described. The description adds no additional meaning beyond the schema, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Calculate US 401(k) contribution, employer match, and total retirement savings' with a specific verb and resource, distinguishing it from siblings like general retirement calculators or other financial tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It does not mention any context or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_capital_gains (grade A)

Calculate US capital gains tax — short-term (ordinary rates) or long-term (preferential rates). Returns: {tax, gain_type}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  sale_price (required): Sale price in USD
  filing_status (optional, default: single): Filing status for rate thresholds
  purchase_price (required): Original purchase price in USD
  holding_period_months (required): Holding period in months

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
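The short-term versus long-term split the description promises hinges on a 12-month holding period. A minimal sketch, where the two rates are illustrative stand-ins for the filer's ordinary and preferential rates (which depend on income and filing status):

```python
# Sketch of the short/long-term capital gains split. short_rate and
# long_rate are illustrative placeholders, not actual bracket rates.
def us_capital_gains(sale_price, purchase_price, holding_period_months,
                     short_rate=0.24, long_rate=0.15):
    gain = sale_price - purchase_price
    # Assets held 12 months or more qualify for long-term treatment.
    gain_type = "long_term" if holding_period_months >= 12 else "short_term"
    rate = long_rate if gain_type == "long_term" else short_rate
    tax = round(max(gain, 0) * rate, 2)
    return {"tax": tax, "gain_type": gain_type}

r = us_capital_gains(15_000, 10_000, holding_period_months=18)
# gain 5_000 held 18 months -> long_term, tax 5_000 * 0.15 = 750
```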
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses the key behavioral distinction between short-term and long-term rates based on holding period. However, it does not explain the specific tax rates or how they are applied, leaving some behavioral ambiguity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's purpose. Every word is meaningful, with no repetition or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with rich schema (100% coverage) and no output schema, the description adequately covers purpose and key behavior. It could mention the output (e.g., tax amount), but the context is sufficient given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds minimal meaning beyond the schema; it links holding_period_months to the short/long-term distinction, but this is already implied by the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates US capital gains tax, distinguishing between short-term (ordinary rates) and long-term (preferential rates). It uses a specific verb ('calculate') and resource ('US capital gains tax'), differentiating it from sibling tax tools like calculate_us_federal_tax.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, such as other tax calculation tools. There are no prerequisites, exclusions, or context about its applicability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_child_tax_credit (grade A)

Calculate US Child Tax Credit for 2026 with phase-out based on AGI. Returns: {agi, base_credit, credit_reduction, final_credit}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  agi (required): Adjusted Gross Income in USD
  filing_status (optional, default: single): Filing status
  children_under_17 (required): Number of qualifying children under age 17

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
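The phase-out mechanics the description alludes to can be sketched with the familiar "reduce by $50 per $1,000 (or fraction) of AGI over a threshold" rule. All figures below (credit per child, thresholds) are illustrative, since the actual 2026 amounts depend on legislation in force and the description does not state them:

```python
import math

# Sketch of an AGI-based Child Tax Credit phase-out. The $2,000 credit
# and the thresholds are illustrative placeholders, not 2026 law.
def us_child_tax_credit(agi, children_under_17, filing_status="single"):
    thresholds = {"single": 200_000, "married_joint": 400_000}
    base_credit = 2_000 * children_under_17
    excess = max(agi - thresholds[filing_status], 0)
    # $50 less for each $1,000 (or part thereof) of AGI over the threshold.
    credit_reduction = 50 * math.ceil(excess / 1_000)
    final_credit = max(base_credit - credit_reduction, 0)
    return {"agi": agi, "base_credit": base_credit,
            "credit_reduction": credit_reduction,
            "final_credit": final_credit}

r = us_child_tax_credit(agi=210_500, children_under_17=2)
# base 4_000; 10_500 over the threshold -> reduction 550 -> credit 3_450
```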
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It correctly indicates a calculation with phase-out, but does not describe the output (e.g., credit amount, eligibility flag) or any side effects. It is not misleading but lacks detail on return behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that includes essential information (year, phase-out) with no fluff. Every word earns its place, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description omits critical details such as the maximum credit per child, phase-out thresholds, and the nature of the return value. For a financial calculation tool, this under-specification reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage, so the description's role is to add context. It mentions 'phase-out based on AGI', which hints at AGI's role but does not specify how filing_status or children_under_17 affect the calculation. This adds marginal value beyond the schema, earning a baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it calculates the US Child Tax Credit for 2026 with a phase-out based on AGI, which is a specific verb+resource combination. It clearly distinguishes itself from the many sibling tax calculators by specifying the exact credit and year.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternatives among the many sibling tax calculators. While the name and description imply a specific use case, the lack of comparative guidance leaves room for ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_federal_tax (grade A)

Calculate US federal income tax for 2026 using progressive brackets with standard deduction. Returns: {gross_income, standard_deduction, taxable_income, federal_tax, effective_rate_pct, marginal_rate_pct, ...}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  income (required): Gross annual income in USD
  filing_status (optional, default: single): Filing status

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
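"Progressive brackets with standard deduction" has a standard shape: tax each slice of taxable income at its bracket rate. The brackets, rates, and deduction below are illustrative placeholders (and filing_status, which would select a different bracket table, is omitted), not the 2026 figures the server uses:

```python
# Sketch of a progressive bracket calculation. Bracket edges, rates, and
# the standard deduction are illustrative placeholders, not 2026 values.
def us_federal_tax(income):
    standard_deduction = 15_000
    brackets = [(0, 0.10), (11_000, 0.12), (44_725, 0.22), (95_375, 0.24)]
    taxable = max(income - standard_deduction, 0)
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if taxable <= lower:
            break
        # Tax only the slice of income that falls inside this bracket.
        tax += (min(taxable, upper) - lower) * rate
    return {"taxable_income": taxable,
            "federal_tax": round(tax, 2),
            "effective_rate_pct": round(100 * tax / income, 2) if income else 0.0}

r = us_federal_tax(60_000)
# taxable 45_000 -> 11_000 at 10% + 33_725 at 12% + 275 at 22% = 5_207.50
```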
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description provides some behavioral context (progressive brackets, standard deduction) but does not detail what the tool does beyond input handling or return format, leaving gaps about deductions, credits, or edge cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is concise and front-loaded with the essential action, though slightly more structure could improve clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately covers purpose and method for a simple 2-param tool without output schema, but lacks details about return values, edge cases, or additional features like deductions beyond standard.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds no significant meaning beyond what the schema already provides for income and filing_status.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates US federal income tax for 2026 using progressive brackets and standard deduction, distinguishing it from state or foreign tax calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for US federal tax calculation but lacks explicit guidance on when to use versus alternatives like state or foreign tax tools, and does not mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_fica (grade B)

Calculate US FICA taxes (Social Security + Medicare) employee share for 2026. Returns: {gross_annual, social_security_taxable, social_security_tax, medicare_base_tax, medicare_additional_tax, medicare_total, ...}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  gross_annual (required): Gross annual salary in USD

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
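The wage-cap and additional-Medicare behavior the review flags as undisclosed looks roughly like this. The 6.2% and 1.45% employee rates and the $200,000 additional-Medicare withholding threshold are the statutory figures; the Social Security wage base, which is adjusted annually, is a placeholder here:

```python
# Sketch of the employee share of FICA. ss_wage_base is an illustrative
# placeholder; the real base is adjusted annually.
def us_fica_employee(gross_annual, ss_wage_base=168_600):
    ss_taxable = min(gross_annual, ss_wage_base)     # 6.2% up to the base
    ss_tax = round(ss_taxable * 0.062, 2)
    medicare_base = gross_annual * 0.0145            # 1.45%, no cap
    # Additional 0.9% Medicare withholding above $200,000.
    medicare_additional = max(gross_annual - 200_000, 0) * 0.009
    return {"social_security_taxable": ss_taxable,
            "social_security_tax": ss_tax,
            "medicare_total": round(medicare_base + medicare_additional, 2)}

r = us_fica_employee(250_000)
# SS: 168_600 * 6.2% = 10_453.20; Medicare: 3_625 + 450 = 4_075
```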
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description lacks behavioral details such as whether it handles Social Security wage caps or additional Medicare taxes, and does not disclose the output format. With no annotations, the description should provide more context about the calculation behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that immediately communicates the tool's purpose. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description should explain the return value (e.g., tax amount, breakdown) or assumptions. It only states the action, leaving the agent uninformed about the result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the parameter 'gross_annual' already described as 'Gross annual salary in USD'. The description adds no additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates US FICA taxes (Social Security and Medicare) employee share for 2026. It specifies the exact tax type and year, distinguishing it from other tax calculators on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other tax-related calculators (e.g., calculate_us_federal_tax). The description does not mention prerequisites or scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_mortgage (grade B)

Calculate US mortgage with PMI, property tax, and insurance estimates. Returns: {home_price, down_payment, loan_amount, monthly_pi, monthly_pmi, monthly_property_tax, ...}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  years (optional): Loan term in years (default 30)
  home_price (required): Home purchase price in USD
  annual_rate (required): Annual mortgage interest rate in %
  down_payment_pct (optional): Down payment percentage (default 20%)

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
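The principal-and-interest part of the output follows the standard amortization formula M = P·r / (1 - (1+r)^-n), and PMI conventionally applies only when the down payment is under 20%. The PMI rate below is an illustrative assumption, since the description does not state one:

```python
# Sketch of a mortgage P&I + PMI calculation. pmi_annual_pct is an
# illustrative assumption; PMI applies only below a 20% down payment.
def us_mortgage(home_price, annual_rate, years=30, down_payment_pct=20.0,
                pmi_annual_pct=0.5):
    down_payment = home_price * down_payment_pct / 100
    loan = home_price - down_payment
    r = annual_rate / 100 / 12                       # monthly rate
    n = years * 12                                   # number of payments
    monthly_pi = loan * r / (1 - (1 + r) ** -n) if r else loan / n
    monthly_pmi = loan * pmi_annual_pct / 100 / 12 if down_payment_pct < 20 else 0.0
    return {"loan_amount": loan,
            "monthly_pi": round(monthly_pi, 2),
            "monthly_pmi": round(monthly_pmi, 2)}

r = us_mortgage(400_000, annual_rate=6.0, down_payment_pct=20)
# loan 320_000 at 0.5%/month over 360 payments -> 1_918.56/month, no PMI
```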
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states it calculates estimates but does not disclose limitations, assumptions, output format, or what aspects are included/excluded. The behavioral traits are not transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise but lacking detail. It is front-loaded but could be more structured. It earns its place but at the expense of completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should be more complete. It does not specify what the output will be (e.g., monthly payment, total cost). With many sibling tools, more context would help the agent choose correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema; it only mentions the components (PMI, tax, insurance) but does not explain how parameters like down_payment_pct or years relate. No extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates a US mortgage including PMI, property tax, and insurance estimates. It uses a specific verb and resource, and distinguishes itself from sibling tools like 'calculate_mortgage' by specifying US and additional components.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide any guidance on when to use this tool versus alternatives. It does not mention exclusions or context where other tools might be more appropriate. The usage is only implied by the name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_paycheck (grade A)

Estimate US bi-weekly net paycheck after federal/state withholding and FICA. Returns: {annual_salary, fica_biweekly, net_biweekly, net_annual_estimate}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  state (optional, default: TX): State for state income tax (TX/FL/WA have no state tax)
  annual_salary (required): Annual salary in USD
  filing_status (optional, default: single): Federal withholding filing status

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
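The bi-weekly estimate combines the individual pieces (federal withholding, state tax, FICA) and divides by 26 pay periods. In this sketch the withholding percentages are flat illustrative stand-ins for real withholding tables, and 7.65% is the combined employee FICA rate below the Social Security wage base:

```python
# Sketch of a bi-weekly net paycheck estimate. flat_federal_pct and
# state_pct are illustrative stand-ins for real withholding tables;
# state_pct would be 0 for no-income-tax states such as TX/FL/WA.
def us_biweekly_paycheck(annual_salary, flat_federal_pct=12.0, state_pct=0.0):
    fica = annual_salary * 0.0765          # 6.2% SS + 1.45% Medicare
    federal = annual_salary * flat_federal_pct / 100
    state = annual_salary * state_pct / 100
    net_annual = annual_salary - fica - federal - state
    return {"fica_biweekly": round(fica / 26, 2),
            "net_biweekly": round(net_annual / 26, 2),
            "net_annual_estimate": round(net_annual, 2)}

r = us_biweekly_paycheck(80_000)
# fica 6_120, federal 9_600 -> net 64_280/year, 2_472.31 per paycheck
```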
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description discloses main behavior (estimates net pay after key deductions) but lacks detail on limitations, assumptions, or excluded deductions (e.g., pre-tax 401k). With no annotations, the description carries full burden but only partially addresses behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence front-loads all essential information: verb, resource, frequency, and deductions. No redundancy or unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but the tool's purpose is straightforward (returns an estimated net amount). Description explains the deduction components, which is sufficient for an agent to understand the tool's scope. Minor gap regarding return format (e.g., number or object) but not critical for a calculator.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear parameter descriptions (state, annual_salary, filing_status). The tool description adds overall context (net paycheck, deductions) but does not provide additional parameter-specific semantics beyond what schema already offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Estimate US bi-weekly net paycheck after federal/state withholding and FICA', specifying verb (estimate), resource (US bi-weekly net paycheck), and scope (withholding and FICA). It is distinct from sibling tools like calculate_us_federal_tax or calculate_us_fica, as it provides a combined net estimate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other US tax or paycheck calculators. Does not mention alternatives, prerequisites, or situations where this tool is inappropriate (e.g., for detailed tax breakdowns). Agent must infer usage from name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_property_tax (grade B)

Estimate annual US property tax by state using effective tax rates. Returns: {home_value, effective_rate_pct, annual_property_tax, monthly_property_tax}. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

  state (optional, default: TX): State (affects effective property tax rate)
  home_value (required): Assessed home value in USD

Output Schema (JSON Schema)

  result (optional): Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
  source (optional): Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
  formula (optional): Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
  reference_url (optional): Link to a calcul2 page documenting the calculation in detail.
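The calculation itself is a single effective-rate multiplication per state, with the monthly figure derived from the annual one. The rate table below is purely illustrative; the server's per-state rates (and which states it supports) are not documented:

```python
# Sketch of an effective-rate property tax estimate. The rates are
# illustrative placeholders, not the server's actual per-state data.
def us_property_tax(home_value, state="TX"):
    effective_rates = {"TX": 1.60, "FL": 0.80, "CA": 0.75}
    rate = effective_rates[state]
    annual = home_value * rate / 100
    return {"effective_rate_pct": rate,
            "annual_property_tax": round(annual, 2),
            "monthly_property_tax": round(annual / 12, 2)}

r = us_property_tax(300_000, "TX")
# 300_000 at 1.60% effective -> 4_800/year, 400/month
```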
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must fully disclose behavioral traits. It states 'estimate' but provides no details on accuracy, data sources, or limitations (e.g., only six states). The tool's behavior remains opaque.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It is front-loaded with the core purpose, though it could briefly note the supported states or output format without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain what the tool returns. It does not mention the return value (likely an annual amount) or any rounding/precision. Also, the limitation to six states is omitted, making the description incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The tool description adds no new parameter meaning beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'estimate', the resource 'annual US property tax', and the method 'by state using effective tax rates'. It effectively distinguishes from siblings like French property tax calculators (calculate_property_tax_fr) that target other jurisdictions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or alternative guidance is provided. The description implies usage for US property tax estimation but does not specify that only six states are supported or contrast with other US tax calculators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_us_state_tax (B)

Compute US state income tax for a chosen state. Use for paycheck planning across states. Inputs: state, gross income, filing status. Returns state tax due and effective rate. See list_bundles for related 'finance-us' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
state | Yes | US state | -
income | Yes | Annual taxable income in USD | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

No annotations provided, and the description does not disclose behavioral traits beyond 'estimate' and 'simplified rates'. It doesn't mention output format, return value, or constraints like income limits or rate precision, which are important for a tax calculator.

Conciseness: 5/5

Single, front-loaded sentence with no redundant information. Efficiently communicates purpose and scope.

Completeness: 2/5

With no output schema, the description should explain what the tool returns (e.g., tax amount, effective rate, or bracket). It only says 'estimate', leaving the agent uncertain about the output. The input schema is simple, but the description lacks crucial context about the result.

Parameters: 3/5

Schema coverage is 100%; both parameters have descriptions. The description adds 'major states' and 'simplified rates' but no additional meaning beyond what the schema provides (state enum, income number). The baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool estimates US state income tax, specifies it covers major states, and notes simplified rates. It effectively distinguishes the tool from its many sibling tax calculators for other regions (e.g., French, Quebec, Swiss).

Usage Guidelines: 3/5

The description implies use for major US state tax estimation but lacks explicit guidance on when to use it versus alternatives, and on limitations (e.g., not all states are covered, rates are simplified). There are no when-not-to-use notes or alternative tool references.

calculate_us_student_loan (B)

Calculate US student loan repayment under standard, graduated, or income-driven plans. Returns: {loan_balance}. See list_bundles for related 'finance-us' calculators.
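As a rough illustration of the arithmetic such a tool likely performs for the standard plan, here is a level-amortization sketch; the 10-year term and monthly compounding are assumptions (common for US federal standard repayment), not documented by the server.

```python
def standard_plan_payment(loan_balance, annual_rate, years=10):
    """Level monthly payment that amortizes the balance over `years`.

    Assumes the 'standard' plan is a fixed 10-year amortization with
    monthly compounding; the server's actual plan rules may differ.
    """
    r = annual_rate / 100 / 12   # monthly interest rate
    n = years * 12               # number of payments
    if r == 0:
        return loan_balance / n  # interest-free edge case
    return loan_balance * r / (1 - (1 + r) ** -n)
```

For example, a $10,000 balance at 6% works out to about $111 per month over ten years.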

Parameters (JSON Schema)

Name | Required | Description | Default
plan | No | Repayment plan | standard
annual_rate | Yes | Annual interest rate in % | -
loan_balance | Yes | Outstanding loan balance in USD | -
annual_income | No | Annual income (required for income_driven plan) | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

No annotations provided, and the description does not disclose what the tool returns (e.g., monthly payment, total interest), nor that annual_income is only required for the income_driven plan. Read-only behavior is implied but never stated.

Conciseness: 5/5

Single sentence with no filler. Efficiently conveys core functionality.

Completeness: 2/5

Lacks details about the output format (e.g., monthly payment, amortization schedule) and the conditional requirement for annual_income. With four parameters and no output schema, a more comprehensive description is needed.

Parameters: 3/5

Schema coverage is 100%, with descriptions for all parameters. The description adds context for the plan enum (standard, graduated, income-driven) but no additional syntax or constraints beyond the schema.

Purpose: 5/5

The description clearly states that the tool calculates US student loan repayment under three specific plans. It is distinct from siblings like calculate_uk_student_loan and the generic calculate_student_loan_repayment.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives (e.g., the generic student loan calculator or the UK-specific one). It does not mention prerequisites such as 'only for US federal loans' or exclusions.

calculate_vacation_days_fr (C)

Compute French paid vacation days earned (congés payés). Use for HR planning. Inputs: months worked, contract type. Returns days earned (2.5/month rule) and equivalent in working days. See list_bundles for related 'finance-france' calculators.
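The 2.5-days-per-month rule the description cites is simple to sketch; the 30-day annual cap and the round-up of partial days are common French practice, assumed here rather than confirmed by the tool's documentation.

```python
import math

def vacation_days_fr(months_worked):
    """Congés payés earned: 2.5 working days per month worked,
    capped at 30 days per reference year. Partial days are rounded
    up, per common French practice (an assumption, not confirmed
    by the tool's docs)."""
    return min(math.ceil(2.5 * months_worked), 30)
```

So seven months worked yields 18 days (17.5 rounded up), and a full year yields the 30-day cap.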

Parameters (JSON Schema)

Name | Required | Description | Default
full_time | No | Full-time | -
months_worked | Yes | Months worked | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

With no annotations provided, the description must disclose behavioral traits. It only states the basic purpose and omits details such as the calculation method, legal basis, or any constraints like rounding.

Conciseness: 3/5

The description is extremely concise at four words, but it sacrifices informativeness. It could include brief context without becoming verbose.

Completeness: 2/5

Given the low complexity, the description is still incomplete. It fails to explain the calculation basis (e.g., French labor law) or what the output represents.

Parameters: 3/5

Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond what the schema already provides for the two parameters.

Purpose: 4/5

The description 'French vacation days earned' clearly indicates the tool calculates earned vacation days for France. It uses a specific verb and resource, but does not differentiate from the sibling tool 'calculate_vacation_days_optimal'.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives like 'calculate_vacation_days_optimal'. There is no context for usage scenarios or exclusions.

calculate_vacation_days_optimal (C)

Compute optimal vacation usage by chaining bridge days with public holidays. Use for HR or worker planning. Inputs: vacation days, country, year. Returns best plan with day count. See list_bundles for related 'voyage' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
public_holidays_count | Yes | Number of public holidays near weekends | -
vacation_days_available | Yes | Annual vacation days available | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

No annotations are provided, so the description must carry the full burden. It does not disclose any behavioral traits such as side effects, required permissions, or rate limits. It only says 'calculate', implying a read-only operation, but this is not explicit.

Conciseness: 5/5

The description is a single, concise sentence with no wasted words. It is appropriately sized for a tool with simple parameters.

Completeness: 2/5

Despite simple parameters and no output schema, the description is too brief. It fails to explain what 'optimal' means, how bridge days are calculated, or what the output represents. This leaves the agent without enough context for correct invocation.

Parameters: 3/5

Schema coverage is 100% with well-described parameters (public_holidays_count and vacation_days_available). The description adds no additional meaning beyond the schema. Baseline 3 is appropriate.

Purpose: 4/5

The description clearly states the tool calculates optimal vacation usage with bridge days. It distinguishes from siblings like 'calculate_vacation_days_fr' (French-specific) and 'calculate_leave_days' (generic). However, 'optimal' is vague and could be more specific.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or context. The description only states what it does, not when to prefer it over similar tools.

calculate_vat_generic (B)

Calculate VAT/GST/sales tax for any country with custom rate. Returns: {amount_before_tax, amount_after_tax, tax_amount, tax_rate}. See list_bundles for related 'finance-universal' calculators.
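The documented return shape follows from one multiplication or one division depending on the mode; a minimal sketch, where the two-decimal rounding is an assumption (the server's precision is not documented):

```python
def vat_generic(amount, rate, mode="ht"):
    """Split an amount into net, tax, and gross for a given rate.

    mode="ht": amount is before tax; mode="ttc": amount is after tax.
    Keys mirror the tool's documented return shape; rounding to two
    decimals is an assumption.
    """
    r = rate / 100
    if mode == "ht":
        before, tax = amount, amount * r
    else:  # "ttc": back the net out of the gross
        before = amount / (1 + r)
        tax = amount - before
    return {
        "amount_before_tax": round(before, 2),
        "tax_amount": round(tax, 2),
        "amount_after_tax": round(before + tax, 2),
        "tax_rate": rate,
    }
```

With a 20% rate, 100 before tax becomes 120 after tax, and 120 after tax splits back into 100 net plus 20 tax.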

Parameters (JSON Schema)

Name | Required | Description | Default
mode | No | ht=before tax, ttc=after tax | ht
rate | Yes | Tax rate in % | -
amount | Yes | Amount | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

No annotations are provided, so the description must fully disclose behavioral traits. It only states 'calculate' with no mention of idempotency, side effects, or safety. Given that the tool performs a pure computation, minimal disclosure is acceptable but still lacking for full transparency.

Conciseness: 4/5

The description is a single concise sentence that front-loads the tool's purpose. It could be expanded with usage guidelines without losing brevity, but currently it is efficient.

Completeness: 3/5

For a simple tax calculator with three parameters and no output schema, the description provides the essential purpose. However, it lacks information about return values and edge cases, making it merely adequate.

Parameters: 3/5

The schema covers all three parameters with descriptions (100% coverage). The description adds the context 'for any country', which is not in the schema, but does not enhance individual parameter understanding, so the baseline of 3 applies.

Purpose: 5/5

The description clearly states the tool calculates VAT/GST/sales tax for any country with a custom rate. It uses a specific verb and resource ('calculate VAT/GST/sales tax') and distinguishes itself from country-specific sibling tools like 'calculate_belgian_vat' by being generic.

Usage Guidelines: 3/5

The description implies use for any country with a custom rate but does not explicitly state when to use this tool instead of country-specific ones. No guidance on prerequisites or limitations.

calculate_vat_reverse (C)

Reverse-VAT: extract the VAT and net price from a TTC amount. Use to back out tax from a gross invoice. Inputs: TTC amount, VAT rate %. Returns HT, VAT amount. See list_bundles for related 'finance-france' calculators.
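Backing the net (HT) out of a gross (TTC) amount is a single division; a sketch, where the 20% default is an assumption (the schema marks vat_rate optional but lists no default; 20% is the French standard rate):

```python
def reverse_vat(amount_incl, vat_rate=20.0):
    """Extract net (HT) and VAT from a VAT-inclusive (TTC) amount.

    The 20.0 default is an assumption (French standard rate); the
    tool's actual default is not documented.
    """
    ht = amount_incl / (1 + vat_rate / 100)
    return {"ht": round(ht, 2), "vat_amount": round(amount_incl - ht, 2)}
```

A 120 TTC invoice at 20% thus backs out to 100 HT and 20 of VAT.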

Parameters (JSON Schema)

Name | Required | Description | Default
vat_rate | No | VAT rate % | -
amount_incl | Yes | Amount including VAT | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

With no annotations provided, the description carries full responsibility for behavioral disclosure. It fails to mention that this is a read-only calculation, whether it requires any authentication, or what the output format is. The single sentence only restates the tool's name.

Conciseness: 2/5

The description is only three words, which is under-specified. While concise, it lacks necessary detail, making it insufficient for effective tool selection.

Completeness: 2/5

Given the absence of an output schema and annotations, the description should explain what the tool returns (e.g., calculated net amount or VAT). It does not, leaving the agent uninformed about the result format.

Parameters: 3/5

The input schema has 100% description coverage for both parameters (amount_incl, vat_rate). The description adds no extra meaning beyond the schema, so baseline 3 is appropriate.

Purpose: 2/5

The description 'Reverse VAT calculation' is a tautology of the tool name 'calculate_vat_reverse', offering no additional specificity about what exactly is computed (e.g., net amount, VAT amount). It does not distinguish the tool from sibling tools like 'calculate_vat_generic'.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives (e.g., standard VAT calculation tools) or any prerequisites. The description lacks usage context entirely.

calculate_velo_development (C)

Calculate bicycle development in meters per pedal revolution. Returns: {gear_ratio, development_m, speed_at_90rpm_kmh}. See list_bundles for related 'sport' calculators.
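The three documented return fields follow directly from the tooth counts and wheel circumference; a sketch using the 700c default quoted in the schema (rounding precision is an assumption):

```python
def velo_development(chainring_teeth, cog_teeth, wheel_circumference_mm=2105):
    """Gear ratio, development per crank revolution, and speed at 90 rpm.

    2105 mm is the 700c road default quoted in the tool's schema;
    rounding precision is an assumption.
    """
    gear_ratio = chainring_teeth / cog_teeth
    development_m = gear_ratio * wheel_circumference_mm / 1000
    # 90 rev/min x 60 min/h = 5400 rev/h; metres per rev -> km/h
    speed_kmh = development_m * 5400 / 1000
    return {
        "gear_ratio": round(gear_ratio, 2),
        "development_m": round(development_m, 2),
        "speed_at_90rpm_kmh": round(speed_kmh, 1),
    }
```

A 50x25 setup gives a 2.0 ratio, 4.21 m of development, and roughly 22.7 km/h at 90 rpm.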

Parameters (JSON Schema)

Name | Required | Description | Default
cog_teeth | Yes | Number of teeth on the rear cog/sprocket | -
chainring_teeth | Yes | Number of teeth on the front chainring | -
wheel_circumference_mm | No | Wheel circumference in mm (700c road default = 2105mm) | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

No annotations are provided, so the description must cover behavioral traits. It only states the calculation and unit, without mentioning that it is a pure computation with no side effects, no data modification, or any other behavioral context. For a calculator, this is minimal.

Conciseness: 4/5

The description is a single, clear sentence with no extra words. It is front-loaded with the action and purpose. While very concise, it effectively communicates the essential information without being verbose.

Completeness: 3/5

Given the low complexity of a calculator tool with fully described parameters and no output schema, the description is adequate but not strong. It could be more complete by mentioning the output format or differentiating from sibling tools.

Parameters: 3/5

Schema description coverage is 100% (each parameter has a description). The tool description adds no additional meaning beyond what the schema already provides, so the baseline score of 3 is appropriate.

Purpose: 4/5

The description states the verb 'Calculate', the resource 'bicycle development', and the unit 'meters per pedal revolution', making the purpose clear. However, it does not explicitly distinguish this tool from similar sibling tools like 'calculate_braquet' or 'calculate_gear_ratio', which could cause confusion.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. There are no explicit or implied usage contexts, when-not-to-use conditions, or references to sibling tools.

calculate_vma (A)

Compute VMA (Maximal Aerobic Speed) from a fitness test result. Use for runners building training plans. Inputs: test type (Cooper 12-min, Luc Léger), result. Returns VMA km/h and zones. See list_bundles for related 'sport' calculators.
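The supported tests reduce to unit conversions; a sketch, where the divisors follow from the test durations (12 min = 0.2 h, 6 min = 0.1 h) and are assumed to match the server's formulas:

```python
def vma_kmh(test, result_value):
    """Maximal Aerobic Speed (km/h) from a field-test result.

    cooper/demi_cooper take a distance in metres; vameval already
    reports the final speed in km/h. The divisors are plain unit
    conversions over the test duration (an assumption).
    """
    if test == "cooper":        # metres covered in 12 min -> km/h
        return result_value / 200
    if test == "demi_cooper":   # metres covered in 6 min -> km/h
        return result_value / 100
    if test == "vameval":       # final stage speed, already km/h
        return result_value
    raise ValueError(f"unknown test: {test}")
```

Running 3000 m in a 12-minute Cooper test therefore corresponds to a VMA of 15 km/h.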

Parameters (JSON Schema)

Name | Required | Description | Default
test | Yes | Test type: cooper (12min run), demi_cooper (6min run), vameval (final speed km/h) | -
result_value | Yes | Distance in meters (cooper/demi_cooper) or final speed in km/h (vameval) | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the purpose without revealing any behavioral traits: no mention of output format, validation behavior, error handling, or side effects. This is insufficient for a tool that performs a calculation.

Conciseness: 5/5

The description is a single sentence of 12 words, front-loading the key purpose and supported tests. Every word earns its place without redundancy or clutter.

Completeness: 4/5

For a simple 2-parameter tool with no output schema, the description provides the essential context: what it does and which inputs it accepts. It lacks only a mention of the output format (e.g., speed in km/h), but overall it is nearly complete for the tool's simplicity.

Parameters: 3/5

The input schema already provides complete descriptions for both parameters (test and result_value), covering 100% of parameters. The tool description adds no new information beyond what the schema states, meeting the baseline expectation but providing no extra value.

Purpose: 5/5

The description uses a specific verb ('Estimate') and a clear resource ('Maximal Aerobic Speed (VMA)'), and lists the exact test types supported. This clearly differentiates it from sibling tools like 'calculate_vo2max' and other fitness calculations.

Usage Guidelines: 4/5

The description explicitly names the three supported tests (Cooper, demi-Cooper, Vameval), which implicitly tells when to use this tool. However, it does not mention when not to use it or suggest alternatives for other test types, leaving some ambiguity.

calculate_vo2max (C)

Estimate VO2max from VMA (maximal aerobic speed). Use for runners assessing cardio fitness. Formula: VO2max ≈ VMA × 3.5. Inputs: VMA in km/h. Returns VO2max in mL/kg/min and fitness category. See list_bundles for related 'sport' calculators.
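Since the description gives the formula outright (VO2max ≈ VMA × 3.5), the computation is a one-liner; only the fitness-category banding, which the server does not specify, is left out of this sketch.

```python
def vo2max_from_vma(vma_kmh):
    """VO2max estimate in mL/kg/min from VMA in km/h, using the
    approximation quoted in the tool description: VO2max = VMA x 3.5."""
    return vma_kmh * 3.5
```

A runner with a VMA of 16 km/h would thus be estimated at 56 mL/kg/min.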

Parameters (JSON Schema)

Name | Required | Description | Default
vma | Yes | VMA in km/h | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must carry the burden. It states 'Estimate' implying a calculation, but does not disclose whether it is read-only, requires specific inputs, or any side effects. For a simple estimation tool, the description is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one short sentence). It front-loads the purpose but could include a bit more context without becoming verbose. It is concise but not overly terse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and no output schema, the description is minimally complete for a straightforward calculation. However, it lacks context like the formula used, expected output format, or prerequisites (e.g., VMA must be known). It is adequate but has gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not add meaning beyond the input schema. Schema coverage is 100% and the schema already describes 'vma' as 'VMA in km/h'. The description offers no additional parameter context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the verb 'Estimate' and the resource 'VO2max from VMA', clearly indicating the tool's purpose. However, it does not differentiate from sibling tools like 'calculate_vma' or other fitness-related calculators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., when VMA is already known vs needing to compute VMA first). The description lacks context for appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_voltage_adapter (B)

Determine voltage adapter and plug type needed for a destination country. Use for international travel with electronics. Inputs: home country, destination, device wattage. Returns adapter type, voltage, plug shape. See list_bundles for related 'voyage' calculators.
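The tool is essentially a lookup over country voltage and plug data. A sketch of that logic with a three-country sample table; the country codes, data layout, and field names are illustrative assumptions, not the server's actual dataset:

```python
# Illustrative subset of mains data (real-world values, but a tiny sample).
COUNTRY_POWER = {
    "FR": {"voltage": 230, "plugs": {"C", "E"}},
    "US": {"voltage": 120, "plugs": {"A", "B"}},
    "UK": {"voltage": 230, "plugs": {"G"}},
}

def adapter_check(from_country: str, to_country: str) -> dict:
    home, dest = COUNTRY_POWER[from_country], COUNTRY_POWER[to_country]
    return {
        # A plug adapter is needed when no plug shape is shared.
        "plug_adapter_needed": home["plugs"].isdisjoint(dest["plugs"]),
        # A voltage converter is needed when mains voltages differ.
        "voltage_converter_needed": home["voltage"] != dest["voltage"],
        "destination_voltage": dest["voltage"],
        "destination_plugs": sorted(dest["plugs"]),
    }
```

Note the two distinct outputs: a France-to-UK trip needs a plug adapter but no voltage converter, while France-to-US needs both.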

Parameters
Name | Required | Description | Default
to_country | Yes | Destination country | -
from_country | Yes | Country of origin | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It fails to mention what the tool returns (e.g., boolean, text) or any side effects, leaving the agent without critical output information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no filler, but it could be improved by including output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks specification of the return value, which is essential for an AI agent. Without output schema, this gap significantly reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter names and enums. The description adds no extra meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool determines if a voltage adapter is needed between two countries, using a specific verb and resource. It distinguishes itself from sibling calculate tools by focusing on travel adapter needs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage in travel planning but does not explicitly state when to use this tool versus alternatives or provide any when-not guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_volume (C)

Compute volume for common 3D shapes (cube, cylinder, sphere, cone, prism). Use for geometry, packaging, or construction. Inputs: shape + dimensions. Returns volume in input-units cubed. See list_bundles for related 'math' calculators.
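The reviews below fault the description for not mapping shapes to required parameters. The standard formulas make that mapping easy to sketch; which parameter names pair with which shape here is an inference from the schema, not documented server behavior (width is accepted for parity with the schema but unused by the shapes shown):

```python
import math

def calculate_volume(shape, *, length=None, width=None, height=None,
                     radius=None, base_area=None):
    """Volume in the cube of whatever unit the dimensions use.
    Assumed pairing: cube -> length; cylinder/cone -> radius + height;
    sphere -> radius; prism -> base_area + height."""
    if shape == "cube":
        return length ** 3
    if shape == "cylinder":
        return math.pi * radius ** 2 * height
    if shape == "sphere":
        return (4 / 3) * math.pi * radius ** 3
    if shape == "cone":
        return math.pi * radius ** 2 * height / 3
    if shape == "prism":
        return base_area * height
    raise ValueError(f"unsupported shape: {shape!r}")
```

A description spelling out this pairing per shape would address the Parameters and Completeness complaints directly.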

Parameters
Name | Required | Description | Default
shape | Yes | Shape | -
width | No | Width | -
height | No | Height | -
length | No | Length/side | -
radius | No | Radius | -
base_area | No | Base area for prism/pyramid | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It only mentions 'calculate', which implies a read-only computation, but omits details like output format, side effects, or prerequisites.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is overly brief for a tool handling 7 shapes with different parameter requirements. It lacks structure and essential details, making it under-specified rather than concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple shapes, no output schema), the description fails to provide necessary context: no parameter selection logic, no output format description, no examples.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema description coverage is 100%, parameter descriptions are tautological (e.g., 'Width', 'Height'). The tool description does not explain which parameters are needed for each shape, missing critical relationships.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates volume for common 3D shapes, which is a specific verb+resource. However, it does not differentiate from sibling tools like calculate_cone or calculate_cylinder, which may cause confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this general tool versus the specialized sibling tools for individual shapes. The description lacks any when-to-use or when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_waist_hip_ratio (C)

Calculate waist-to-hip ratio and cardiovascular risk level. Returns: {risk_threshold, cardiovascular_risk}. See list_bundles for related 'sante' calculators.
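The ratio itself is waist divided by hip; the returned risk fields presumably compare it against sex-specific thresholds. A sketch using the WHO cut-offs for substantially increased cardiometabolic risk (0.90 for men, 0.85 for women); whether the server applies exactly these thresholds and labels is an assumption:

```python
def waist_hip_ratio(waist_cm: float, hip_cm: float, sex: str) -> dict:
    ratio = waist_cm / hip_cm
    # WHO cut-offs for substantially increased risk (assumed to match
    # the server's risk_threshold field): 0.90 for men, 0.85 for women.
    threshold = 0.90 if sex == "male" else 0.85
    return {
        "ratio": round(ratio, 2),
        "risk_threshold": threshold,
        "cardiovascular_risk": "elevated" if ratio >= threshold else "normal",
    }
```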

Parameters
Name | Required | Description | Default
sex | Yes | Biological sex | -
hip_cm | Yes | Hip circumference in centimeters | -
waist_cm | Yes | Waist circumference in centimeters | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states what the tool calculates but does not describe any side effects, data handling, permissions, or output interpretation. Lacks sufficient transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, very concise with no unnecessary information. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a health calculation tool without output schema or annotations, the description should provide more context about the risk level output, interpretation, or calculation methodology. It is incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic descriptions for waist_cm, hip_cm, and sex. The description adds no extra meaning beyond the schema, so a baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates waist-to-hip ratio and cardiovascular risk level, which is specific and distinct from other calculation tools. However, it does not explicitly differentiate from siblings like calculate_bmi or calculate_body_fat, so a 4 is appropriate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no prerequisites or exclusions mentioned. The description lacks context for appropriate use in health-related calculations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_wallpaper (C)

Compute wallpaper rolls needed for a room with pattern repeat factor. Use for renovation budget. Inputs: room dimensions, roll size, pattern repeat. Returns roll count. See list_bundles for related 'construction' calculators.
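The standard roll-count method behind tools like this is: strips per roll from roll length divided by strip height, strips needed from perimeter divided by roll width, then round up. A sketch; the default roll size (0.53 m x 10.05 m, a common European format) and the pattern_repeat_m parameter are assumptions added for illustration, since the description does not pin them down:

```python
import math

def wallpaper_rolls(perimeter_m: float, height_m: float,
                    roll_width: float = 0.53, roll_length: float = 10.05,
                    pattern_repeat_m: float = 0.0) -> int:
    # Each strip must cover the wall height plus one pattern repeat of waste.
    strip_length = height_m + pattern_repeat_m
    strips_per_roll = math.floor(roll_length / strip_length)
    strips_needed = math.ceil(perimeter_m / roll_width)
    return math.ceil(strips_needed / strips_per_roll)
```

Stating whether the server rounds up like this, and how it treats the pattern repeat, would resolve the Behavior complaint below.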

Parameters
Name | Required | Description | Default
height_m | Yes | Height m | -
roll_width | No | Roll width m | -
perimeter_m | Yes | Room perimeter m | -
roll_length | No | Roll length m | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description offers no behavioral insights such as whether the result rounds up, accounts for pattern repeats, or returns an integer. The agent gets no clue about the tool's behavior beyond its name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence with no fluff, but it is too brief to be fully informative. It could benefit from elaborating on the calculation scope or output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple nature (4 numeric parameters, no output schema), the description fails to explain what the output represents (e.g., number of rolls, integer vs decimal) or any assumptions about pattern matching. It leaves significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the input schema fully documents the parameters. The description adds no extra meaning beyond what is already in the schema, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Calculate wallpaper rolls needed' essentially restates the tool name without adding specificity or distinguishing it from sibling tools like calculate_paint_needed or calculate_tile_quantity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Among siblings focusing on similar room renovation calculations, there is no context for preferring this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_wallpaper_rolls (C)

Compute wallpaper rolls for a room including pattern repeat and waste. Use for renovation. Inputs: walls m², roll dimensions, pattern repeat. Returns roll count and length needed. See list_bundles for related 'construction' calculators.

Parameters
Name | Required | Description | Default
height_m | Yes | Wall height m | -
roll_width | No | Roll width m | -
perimeter_m | Yes | Room perimeter m | -
roll_length | No | Roll length m | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits, but it only states purpose. It does not clarify that it performs a calculation without side effects, nor any assumptions about units or output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise but overly brief, lacking useful structure. It could be expanded with more detail without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is critically incomplete. It does not explain the calculation purpose, required inputs, or how to interpret results, especially given no output schema or annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes parameters. The description adds no extra context about parameter meaning or usage, but baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Wallpaper rolls needed' vaguely indicates the tool calculates wallpaper rolls, but it does not distinguish from the sibling tool 'calculate_wallpaper', leaving ambiguity about which to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'calculate_wallpaper' or other room area calculators. Context for appropriate use is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_water_bill (A)

Compute water bill from cubic meters consumed and tariff bands. Use for household budget. Inputs: m³ consumed, fixed fee, variable rate. Returns total bill. See list_bundles for related 'vie-quotidienne' calculators.
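With the default tariff from the schema the arithmetic is one multiplication. A sketch; the optional fixed_fee parameter is added here for illustration only, since the tool's schema exposes just cubic_meters and price_per_m3:

```python
def water_bill(cubic_meters: float, price_per_m3: float = 4.34,
               fixed_fee: float = 0.0) -> float:
    """Total bill in EUR. 4.34 EUR/m3 is the schema's documented default
    (France 2026); fixed_fee is illustrative only."""
    return round(fixed_fee + cubic_meters * price_per_m3, 2)

# 120 m3, roughly a household's annual consumption:
print(water_bill(120))  # 520.8
```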

Parameters
Name | Required | Description | Default
cubic_meters | Yes | Water consumption in m³ | -
price_per_m3 | No | Price per m³ | 4.34 EUR (France 2026)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the core behavior (calculation from cubic meters) but does not mention side effects, access needs, or error conditions. For a pure calculation, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, short, front-loaded sentence with no wasted words. Every word contributes to understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, two parameters with full schema descriptions, and no output schema needed, the description is complete. No missing information that would hinder proper usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema itself documents both parameters well. The description adds no additional meaning beyond what is already in the schema, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'water bill', and specifies the input 'cubic meters consumed'. It distinguishes itself from other calculate tools by naming the specific resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is given on when to use this tool versus alternatives. However, for a simple calculation tool, the context is clear and no further exclusions are needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_water_hardness (B)

Calculate water hardness in French degrees from calcium and magnesium concentrations. Returns: {thresholds}. See list_bundles for related 'cuisine' calculators.
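French degrees are defined as 1 °f = 10 mg/L of CaCO₃ equivalent, so the conversion weights calcium and magnesium by their molar-mass ratios to CaCO₃. A sketch; the rounding, and whether the server uses exactly these coefficients, are assumptions:

```python
def water_hardness_f(calcium_mg_l: float, magnesium_mg_l: float) -> float:
    """Hardness in French degrees; 1 degree f = 10 mg/L CaCO3 equivalent.
    2.497 = M(CaCO3)/M(Ca); 4.118 = M(CaCO3)/M(Mg). Rounding is assumed."""
    caco3_mg_l = calcium_mg_l * 2.497 + magnesium_mg_l * 4.118
    return round(caco3_mg_l / 10, 1)
```

Stating the mg/L units and this conversion in the description would fill the gaps the Parameters review notes, given the empty schema descriptions.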

Parameters
Name | Required | Description | Default
calcium_mg_l | Yes | - | -
magnesium_mg_l | Yes | - | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as the formula used, potential rounding, or input validation beyond the schema's minimum constraints. The agent learns nothing beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no extraneous information, appropriately front-loading the purpose and inputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple calculation tool with two numeric parameters and no output schema, the description is adequate but incomplete: it does not specify the output format (e.g., a number, possibly rounded) or any edge cases (e.g., zero concentrations).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds context by stating the output unit (French degrees) and the source of the parameters (calcium and magnesium concentrations). However, with 0% schema description coverage, the description only partially compensates; it does not explain, for instance, that units are mg/L or how the formula works.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'calculate' with the resource 'water hardness in French degrees' and explicitly mentions the inputs (calcium and magnesium concentrations), making the purpose very clear and distinguishing it from the many other 'calculate' tools on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. While no sibling tool directly competes, the description could mention conversion factors or applicable standards (e.g., French degree formula), but it does not.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_water_heater_size (C)

Calculate recommended water heater tank size for a household. See list_bundles for related 'plomberie' calculators.
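The parameter descriptions already give the per-person figures, so the core rule is household_size times litres per person. A sketch; snapping the result up to a commercial tank capacity is an illustrative assumption, not documented server behavior:

```python
USAGE_L_PER_PERSON = {"low": 30, "normal": 50, "high": 70}  # from the schema

def water_heater_size(household_size: int, usage: str) -> int:
    """Recommended tank size in litres."""
    needed = household_size * USAGE_L_PER_PERSON[usage]
    # Snap up to a common commercial capacity (assumption).
    for tank_l in (50, 100, 150, 200, 250, 300):
        if tank_l >= needed:
            return tank_l
    return needed
```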

Parameters
Name | Required | Description | Default
usage | Yes | Water usage level: low (30L/person), normal (50L/person), high (70L/person) | -
household_size | Yes | Number of people in the household | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits, assumptions, or side effects. For a calculation tool, it is presumably read-only, but the description does not confirm this or mention any constraints like rounding or default units.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise without any extraneous words. It could be improved by including a brief note on output, but it is not verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity, the description is too minimal. It does not mention the output unit (e.g., liters or gallons) or any important context like the assumed water usage per person (though that is covered in the parameter enum). The schema describes parameters well, but the tool's purpose and output remain incompletely specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so each parameter already has a description. The tool description adds no extra meaning beyond what the schema provides. According to the rules, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate' and the resource 'water heater tank size', with context 'for a household'. It distinguishes itself from sibling tools by its specific resource. However, it could be more precise, e.g., specifying that it calculates recommended capacity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. With many calculation tools on the same server, such as calculate_aquarium_volume or calculate_water_intake, the agent has no hint about appropriate use cases or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_water_intake (C)

Compute recommended daily water intake in liters by weight, activity, climate. Use for hydration planning. Inputs: weight kg, activity level, climate (temperate/hot). Returns L/day and glass count. See list_bundles for related 'sante' calculators.
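The description gives inputs and outputs but no coefficients. A sketch built on the common 30-35 mL/kg baseline with illustrative bumps for activity and hot climate; every number here, and the enum values assumed for activity_level and climate, are guesses at what the server might do, not its documented formula:

```python
def daily_water_intake(weight_kg: float, activity_level: str,
                       climate: str) -> dict:
    liters = weight_kg * 0.033            # ~33 mL per kg baseline (assumed)
    if activity_level == "high":
        liters += 0.5                     # extra for heavy training (assumed)
    if climate == "hot":
        liters += 0.5                     # extra for a hot climate (assumed)
    return {
        "liters_per_day": round(liters, 1),
        "glasses": round(liters / 0.25),  # 25 cL glasses
    }
```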

Parameters
Name | Required | Description | Default
climate | Yes | Climate | -
weight_kg | Yes | Body weight kg | -
activity_level | Yes | Activity level | -

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states the purpose but does not disclose any behavioral traits such as output format, assumptions, or side effects. Minimal disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 10 words, very concise and front-loaded. However, it does not include the climate parameter, making it slightly incomplete.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With three parameters (including enums) and no output schema, the description lacks details on output format, calculation logic, or interpretation of results. It is incomplete for a tool that likely produces a recommendation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with basic descriptions for each parameter (e.g., 'Body weight kg', 'Activity level', 'Climate'). The description adds no additional meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates recommended daily water intake based on weight and activity. It uses a specific verb and resource. However, it omits the climate parameter from the description, which is a required input, and does not distinguish from sibling tools, though the sibling set is large and varied.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no prerequisites or exclusions stated. The description provides no context for appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_water_pressure_loss (Grade B)

Calculate water pressure loss in a pipe circuit including friction and elevation. Returns: {equiv_length_m}. See list_bundles for related 'plomberie' calculators.
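
The description names friction and elevation but not the method. One plausible reading can be sketched with the Hazen-Williams friction formula; the C coefficient, the unit conversions, and the output field names below are assumptions, not the server's documented implementation.

```python
def pressure_loss(diameter_mm, flow_rate_lpm, pipe_length_m,
                  fittings_count=0, elevation_change_m=0.0):
    """Estimate pipe pressure loss (method assumed, not the server's).

    Friction: Hazen-Williams SI form, C=140 assumed for smooth pipe.
    Fittings: each adds ~0.5 m equivalent length, per the parameter docs.
    Elevation: ~0.0981 bar per meter of water column.
    """
    C = 140.0                            # Hazen-Williams roughness coefficient (assumed)
    q = flow_rate_lpm / 60000.0          # L/min -> m^3/s
    d = diameter_mm / 1000.0             # mm -> m
    equiv_length_m = pipe_length_m + 0.5 * fittings_count
    # Hazen-Williams head loss in meters of water column
    h_f = 10.67 * equiv_length_m * q**1.852 / (C**1.852 * d**4.87)
    # Convert head to pressure and add the static elevation term
    loss_bar = 0.0981 * (h_f + elevation_change_m)
    return {"equiv_length_m": equiv_length_m,
            "pressure_loss_bar": round(loss_bar, 3)}
```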

Parameters (JSON Schema)
Name | Required | Description
diameter_mm | Yes | Pipe internal diameter in millimeters
flow_rate_lpm | Yes | Flow rate in liters per minute
pipe_length_m | Yes | Pipe length in meters
fittings_count | No | Number of fittings and elbows (each adds ~0.5m equivalent length)
elevation_change_m | No | Elevation change in meters (positive = uphill, negative = downhill)

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavior. It mentions friction and elevation but omits the calculation method (e.g., Darcy-Weisbach), assumptions, or limitations. The output format and units are not stated, which is critical for a calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that effectively communicates the tool's primary function. Every word is meaningful and there is no superfluous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should explain what the result is (e.g., pressure loss in Pascals or bars). Without this, the agent cannot determine the expected output. Additionally, no edge cases or error conditions are mentioned, leaving the description incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with each parameter's purpose clearly defined (e.g., diameter, flow rate, elevation). The description adds 'friction and elevation' but does not enhance parameter meaning beyond what the schema already provides. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates water pressure loss in a pipe circuit, specifying inclusion of friction and elevation. This distinguishes it from many sibling calculator tools that cover different domains or specific parameters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. The context of pipe circuit calculations is implied but not elaborated. Alternatives like calculate_hydraulic_pressure or calculate_pipe_diameter exist on the server, but no differentiation is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_wavelength_frequency (Grade A)

Solve c=λ·f for EM waves. Provide wavelength or frequency. Also returns photon energy E=hf. See list_bundles for related 'science' calculators.
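
The stated relations c = λ·f and E = h·f are standard physics; a sketch of the calculation the description implies is below. The returned field names are illustrative, not the server's actual output schema.

```python
# SI-defined constants: speed of light and Planck's constant
C = 299_792_458.0        # m/s
H = 6.62607015e-34       # J·s

def em_wave(frequency_hz=None, wavelength_m=None):
    """Solve c = λ·f given either quantity; also return photon energy E = h·f."""
    if frequency_hz is None:
        frequency_hz = C / wavelength_m   # derive frequency from wavelength
    else:
        wavelength_m = C / frequency_hz   # derive wavelength from frequency
    return {"frequency_hz": frequency_hz,
            "wavelength_m": wavelength_m,
            "photon_energy_j": H * frequency_hz}
```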

Parameters (JSON Schema)
Name | Required | Description
frequency_hz | No | Frequency in Hz
wavelength_m | No | Wavelength in meters

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden for behavioral disclosures. It correctly indicates a stateless calculation and mentions the equation, but does not disclose the use of standard constants (e.g., speed of light, Planck's constant) or any input range limitations. Some ambiguities remain regarding the exact outputs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences with no unnecessary words. It front-loads the key equation and purpose, making it easy to parse quickly. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description partially covers return values by mentioning photon energy, but it does not explicitly state that both wavelength and frequency are returned. This could lead to ambiguity. For a simple tool, it is adequate but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds value by indicating that only one parameter (wavelength or frequency) is needed, and that the tool will compute the other plus photon energy. This helps the agent understand the optionality of inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool solves the wave equation c=λ·f for EM waves, explicitly listing the inputs (wavelength or frequency) and output (photon energy). The name 'calculate_wavelength_frequency' aligns well with the function. It is well-distinguished from sibling tools by focusing on wave properties.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have either frequency or wavelength, and it returns the missing value plus energy. However, it does not explicitly state when to use this tool over alternative physics calculators, nor does it provide conditions for when not to use it. The phrase 'Provide wavelength or frequency' is clear but lacks depth.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_wave_properties (Grade C)

Compute wave frequency, wavelength, or period from any two. Formula: c=λ·f. Use for physics or acoustics. Inputs: any 2 of (frequency Hz, wavelength m, speed m/s). Returns missing values. See list_bundles for related 'science' calculators.
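
The underlying relations v = f·λ and T = 1/f are standard; a sketch is below, using the schema's 343 m/s sound-speed hint as a default. The return shape is illustrative.

```python
def wave_properties(frequency_hz, speed_ms=343.0):
    """Derive wavelength and period from frequency and wave speed.

    Uses v = f·λ and T = 1/f; 343 m/s defaults to the speed of sound
    in air at ~20°C, mirroring the schema hint.
    """
    wavelength_m = speed_ms / frequency_hz
    period_s = 1.0 / frequency_hz
    return wavelength_m, period_s
```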

Parameters (JSON Schema)
Name | Required | Description
speed_ms | No | Wave speed m/s (343=sound)
frequency_hz | Yes | Frequency in Hz

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description does not disclose any behavioral traits beyond the schema. It does not state that the tool performs a calculation, what the output represents, or any side effects. The agent must infer behavior from the name and schema, which is ambiguous.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (4 words) but lacks a clear structure or a verb. It is front-loaded but underspecified. While brevity is valued, it sacrifices clarity and completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description must explain what the tool returns, but it does not. The tool has two input parameters and likely computes derived values, yet no hint of the relationship (e.g., v = fλ). The description is insufficient for an agent to understand the tool's functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters described adequately in the schema (speed in m/s, frequency in Hz). The description adds no additional meaning or contextual explanation, but the schema already provides sufficient parameter semantics. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description is merely a noun phrase listing physics terms ('Wave frequency, wavelength, period'), lacking a verb or action statement. It does not specify what the tool calculates (e.g., wavelength from frequency and speed, or period). This is vague and fails to distinguish it from sibling tools like 'calculate_wavelength_frequency'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. No context is given about prerequisites, assumptions (e.g., medium), or scenarios where it is appropriate. The agent has no basis to choose this over similar physics calculators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_wind_chill (Grade B)

Calculate the perceived wind chill temperature (Environment Canada formula). Returns: {feels_colder_by_degrees}. See list_bundles for related 'astronomie-nature' calculators.
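
The Environment Canada wind chill index the description cites is a public formula; a sketch follows, including the documented 10°C validity bound from the parameter docs. The exact rounding and output shape are assumptions.

```python
def wind_chill_c(temperature_c, wind_speed_kmh):
    """Environment Canada wind chill index.

    Valid for air temperatures of 10°C or below and winds of roughly
    4.8 km/h or more; inputs outside that range are out of scope.
    """
    if temperature_c > 10:
        raise ValueError("formula defined for air temperatures of 10C or below")
    v = wind_speed_kmh ** 0.16
    chill = 13.12 + 0.6215 * temperature_c - 11.37 * v + 0.3965 * temperature_c * v
    return round(chill, 1)
```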

Parameters (JSON Schema)
Name | Required | Description
temperature_c | Yes | Air temperature in degrees C (must be 10C or below)
wind_speed_kmh | Yes | Wind speed in km/h

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It mentions the formula (Environment Canada) but does not disclose any behavioral traits like input validation, error handling, or output format. This is minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that conveys essential information without any fluff. It is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool and full schema coverage, the description is adequate but not rich. It lacks details about output format, units, or special cases, leaving some gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for both parameters. The description adds the context of 'perceived' temperature and the specific formula, but does not add meaning beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates wind chill temperature using the Environment Canada formula. This is a specific verb-resource pair and distinguishes it from the many other calculate_* sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor any conditions or exclusions. It simply states what it does without context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_wire_gauge_convert (Grade A)

Convert AWG wire gauge to diameter (mm) and resistance (ohms/m). Use for electrical projects. Inputs: AWG number. Returns diameter, area, resistance, max current. See list_bundles for related 'conversions' calculators.
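
The AWG-to-diameter relation is standardized (d = 0.127 mm × 92^((36-n)/39)); a sketch follows. The copper resistivity is an assumption since the description does not state the conductor material, and max current is omitted because ampacity depends on insulation and installation.

```python
import math

def awg_properties(awg):
    """AWG gauge -> diameter (mm), cross-section (mm^2), resistance (ohm/m).

    Standard AWG diameter formula; copper resistivity 1.724e-8 ohm*m assumed.
    """
    diameter_mm = 0.127 * 92 ** ((36 - awg) / 39)
    area_mm2 = math.pi * (diameter_mm / 2) ** 2
    resistance_ohm_per_m = 1.724e-8 / (area_mm2 * 1e-6)  # rho / A, A in m^2
    return diameter_mm, area_mm2, resistance_ohm_per_m
```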

Parameters (JSON Schema)
Name | Required | Description
awg | Yes | AWG gauge number

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It discloses that the tool converts to diameter and resistance, but lacks details on output format, units, precision, or error handling. This is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that immediately conveys the tool's purpose. It is front-loaded with the action and resource, with no extraneous words. Perfectly concise for the information provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter, 100% schema coverage, and no output schema, the description is functional but lacks details on what exactly is returned (both diameter and resistance? units?). It is adequate but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no additional meaning beyond the schema. The schema already describes the 'awg' parameter as an integer 0-40. The mention of 'diameter and resistance' in the description relates to the tool's output, not parameter semantics. Baseline 3 applies due to 100% schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts AWG wire gauge to diameter and resistance. It uses a specific verb 'Convert' and identifies the resource 'AWG wire gauge', making its purpose unambiguous. With many sibling tools, this description effectively distinguishes this tool's functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, no prerequisites, and no exclusions. It simply states the conversion action without context on appropriate use cases or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_working_days (Grade A)

Count working days Mon-Fri between two dates with optional French public holiday exclusion. Returns: {working_days, public_holidays_excluded}. See list_bundles for related 'temps-rh' calculators.
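
The Mon-Fri counting logic is straightforward to sketch; the holiday table is left as a caller-supplied list here, since the server presumably ships its own French public-holiday data.

```python
from datetime import date, timedelta

def working_days(start_date, end_date, holidays=()):
    """Count Mon-Fri days in [start_date, end_date], skipping given holidays.

    Dates are ISO strings (YYYY-MM-DD), matching the tool's schema.
    """
    start = date.fromisoformat(start_date)
    end = date.fromisoformat(end_date)
    holiday_set = {date.fromisoformat(h) for h in holidays}
    count = 0
    day = start
    while day <= end:
        if day.weekday() < 5 and day not in holiday_set:  # 0-4 = Mon-Fri
            count += 1
        day += timedelta(days=1)
    return count
```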

Parameters (JSON Schema)
Name | Required | Description | Default
country | No | Country for public holidays (FR supported) | FR
end_date | Yes | YYYY-MM-DD — End date |
start_date | Yes | YYYY-MM-DD — Start date |

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden of disclosing behavior. It explains the core logic (weekdays Mon-Fri, optional holiday exclusion) but does not address edge cases, error handling, or return type.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that communicates the tool's essential purpose without any extraneous words. Every word serves a clear function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the action but lacks information about return values (e.g., integer, object), which is problematic since there is no output schema. It is adequate for a simple tool but incomplete in terms of expected output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are already documented. The description adds context about French public holiday exclusion via the country parameter but does not elaborate on the 'OTHER' option or provide additional usage details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description includes a specific verb ('Count'), a clear resource ('working days Mon-Fri between two dates'), and mentions optional French public holiday exclusion. This unambiguously defines the tool's purpose and distinguishes it from other date calculation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for counting weekdays but does not explicitly state when to use this tool over alternatives like 'calculate_days_between', nor does it mention any prerequisites or constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_z_score (Grade C)

Compute z-score (standardized score) and percentile from a value, mean, and standard deviation. Use for statistics and outlier detection. Returns z, percentile, p-value. See list_bundles for related 'math' calculators.
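
The z-score, percentile, and p-value the description promises follow directly from the normal CDF; a sketch is below. The returned field order, the rounding, and the two-tailed p-value convention are illustrative assumptions, and the percentile interpretation assumes a normal distribution.

```python
import math

def z_score(value, mean, std_dev):
    """z = (x - mean)/std_dev, plus normal-CDF percentile and two-tailed p-value."""
    z = (value - mean) / std_dev
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))          # standard normal CDF
    cdf_abs = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    percentile = cdf * 100
    p_two_tailed = 2 * (1 - cdf_abs)
    return round(z, 4), round(percentile, 2), round(p_two_tailed, 4)
```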

Parameters (JSON Schema)
Name | Required | Description
mean | Yes | Population mean
value | Yes | Observed value
std_dev | Yes | Standard deviation

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must convey behavioral traits. It only states the tool computes z-score and percentile, without disclosing whether it returns both, requires normal distribution assumption, or any other behavioral details. This is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very brief, which is concise in word count but lacks structure and fails to provide essential information. It is not overly verbose, but brevity at the expense of clarity reduces effectiveness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain what the tool returns (e.g., both z-score and percentile). It omits this and also does not mention any underlying assumptions (e.g., normal distribution). The tool is simple, but the description is incomplete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already covers all parameters with 100% description coverage (mean, value, std_dev). The description adds no extra meaning beyond what the schema provides, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Z-score and percentile' indicates the tool computes these statistical measures, which is adequate but lacks a clear verb-action (e.g., 'Calculate' or 'Get'). It distinguishes from siblings like 'calculate_percentile_rank' but does not explicitly differentiate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. For example, a sibling 'calculate_percentile_rank' exists, but there is no mention of how this tool differs or when each is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_angle (Grade B)

Convert angle between degrees, radians, gradians, turns, arcminutes, arcseconds. Use for math, navigation, surveying. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.
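
All six listed units are fixed fractions of a full turn, so the conversion reduces to a single pivot; a sketch follows. The unit-key spellings are assumed, since the schema does not enumerate the accepted strings.

```python
# How many of each unit make one full turn
UNITS_PER_TURN = {
    "degree": 360.0,
    "radian": 6.283185307179586,  # 2*pi
    "gradian": 400.0,
    "turn": 1.0,
    "arcminute": 360.0 * 60,
    "arcsecond": 360.0 * 3600,
}

def convert_angle(value, from_unit, to_unit):
    """Convert by pivoting through full turns: value / per_turn[from] * per_turn[to]."""
    return value / UNITS_PER_TURN[from_unit] * UNITS_PER_TURN[to_unit]
```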

Parameters (JSON Schema)
Name | Required | Description
to | Yes | Target unit
from | Yes | Source unit
value | Yes | Angle value

Output Schema

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description should disclose behavioral traits, but it only states the purpose. It does not mention any side effects, read-only behavior, or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the purpose. Every word is meaningful and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the schema's richness, and the lack of output schema, the description is adequate. It covers the core purpose, though additional usage guidance would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter, so baseline is 3. The description adds no additional meaning beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Convert between angle units' clearly states the verb 'convert' and the resource 'angle units', distinguishing it from sibling tools like convert_temperature or convert_speed. It is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus other conversion tools or alternatives. It lacks any context about prerequisites or conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_area (grade B)

Convert area between m², km², ha, are, acre, ft², yd², cm². Use for real estate, agriculture, or construction. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Area value | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
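Tools in this 'conversions' bundle share a uniform value/from/to input shape, so invoking one over the Streamable HTTP transport is an ordinary MCP tools/call request. A hedged sketch of such a request (the unit strings "ha" and "m2" are assumptions; the schema's enums define the exact spellings the server accepts):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "convert_area",
    "arguments": { "value": 2.5, "from": "ha", "to": "m2" }
  }
}
```

The response would carry the result/source/formula/reference_url fields described by the tool's output schema.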
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as error handling, precision, or side effects. The description is too vague.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with 4 words, front-loaded, no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple unit conversion tool, the description is minimally complete. However, with no output schema or annotations, it lacks behavioral details. Scores a 3.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description adds no additional meaning beyond the schema. Baseline score of 3 applied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Convert between area units' clearly states the specific verb (convert) and resource (area units), distinguishing it from sibling convert_* tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use or avoid this tool, nor alternatives mentioned. Sibling tools exist but no explicit usage context is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_cooking (grade B)

Convert cooking measurements between ml, L, g, kg, cup, tbsp, tsp, fl_oz, oz. Use for recipe scaling and translating. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Quantity | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It only repeats the purpose without adding any nuance about side effects, error handling, or limitations (e.g., how cross-type conversions such as ml to g are handled). The description adds no behavioral context beyond what the schema already implies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that is front-loaded with the key action and object. It is efficient, though it could be slightly more informative by explicitly stating the supported unit types.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple nature and the schema's enums, the description is minimally complete. It does not state how cross-type conversions (e.g., ml to g) are handled, which a user cannot infer from the schema alone. For a tool with no output schema and moderate complexity, this is adequate but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with descriptions, so the description does not need to elaborate. However, it adds no extra meaning about how the parameters interact or about constraints (e.g., which unit pairs can be combined), resulting in a neutral score at the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert' and the resource 'cooking measurement units', making the tool's purpose immediately obvious. It distinguishes from sibling tools like convert_volume or convert_weight by specifying 'cooking', implying a domain-specific set of units.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as convert_volume or other cooking-related converts. There is no mention of when not to use it or what scenarios it is optimized for, leaving the agent to infer usage solely from the tool name and schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_data_storage (grade B)

Convert between digital storage units (binary and decimal). Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Storage value | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
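The binary/decimal distinction the description mentions is the classic split between 1024-based (KiB, MiB, ...) and 1000-based (KB, MB, ...) units. A minimal sketch of that distinction, for illustration only (the unit spellings are assumptions, not this server's actual enum values):

```python
# Illustrative only: decimal (SI, 1000-based) vs binary (IEC, 1024-based)
# storage units. Unit names here are assumed, not the server's enums.
DECIMAL = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
BINARY = {"B": 1, "KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def convert_storage(value: float, from_unit: str, to_unit: str) -> float:
    """Convert via bytes: value * bytes(from) / bytes(to)."""
    units = {**DECIMAL, **BINARY}
    return value * units[from_unit] / units[to_unit]

print(convert_storage(1, "GiB", "MiB"))  # 1024.0
```

The mixed table is what makes a dedicated tool useful: "1 GiB" is about 1.074 GB, and that roughly 7% gap compounds at larger prefixes.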
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states the conversion action without disclosing any behavioral traits like rounding, precision, or output format. This is minimal transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that efficiently conveys the tool's purpose. It is appropriately sized but could benefit from including behavioral details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is adequate for a simple conversion tool, but it omits mention of output format or return value. Given the lack of output schema, this gap reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema's property descriptions, which already define value, from, and to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the verb 'Convert' and the resource 'digital storage units', further refined with binary and decimal scopes. This distinguishes it from sibling conversion tools like convert_angle or convert_area.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this tool is for storage unit conversions, but provides no explicit guidance on when to use it over alternatives, such as when to prefer binary over decimal or vice versa.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_distance (grade B)

Convert distance between metric (m, km, cm, mm) and imperial (in, ft, yd, mi) plus nautical miles. Use for travel, sport, or engineering. Inputs: value, from, to. Returns: {input, factor}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Distance value | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
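The tool's Returns field mentions a factor, which points at the usual factor-table design: every unit maps to a base unit and conversion is one multiply and one divide. An illustrative sketch under that assumption (factors are the standard definitions; the unit keys, including "nmi" for nautical mile, are mine, not the server's enums):

```python
# Illustrative factor-table length converter routed through meters.
# Unit keys are assumptions; the server's schema enums may spell them differently.
TO_METERS = {
    "m": 1.0, "km": 1000.0, "cm": 0.01, "mm": 0.001,
    "in": 0.0254, "ft": 0.3048, "yd": 0.9144,
    "mi": 1609.344, "nmi": 1852.0,
}

def convert_distance(value: float, from_unit: str, to_unit: str) -> float:
    """Convert via meters: value * factor(from) / factor(to)."""
    return value * TO_METERS[from_unit] / TO_METERS[to_unit]

print(round(convert_distance(26.2, "mi", "km"), 3))  # 42.165
```

The same table-through-a-base-unit pattern covers most of the sibling converters (area, speed, energy, pressure); only scales with an offset, like temperature, need a different shape.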
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It only says 'Convert between distance/length units' without detailing precision, rounding, return format, or error handling. Minimal transparency for a conversion tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no extraneous information. However, it could be slightly more detailed without becoming verbose, so not a perfect 5.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (3 required params, conversion operation) and full schema coverage, the description is adequate but fails to mention return format, edge cases, or precision. No output schema to compensate, so baseline completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for all parameters, including enums for 'from' and 'to'. The description adds no additional meaning beyond what the schema already provides, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts between distance/length units, using a specific verb ('convert') and resource type ('distance/length units'), distinguishing it from sibling conversion tools for other units like area or temperature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for distance/length conversions through its naming and sibling context, but provides no explicit guidance on when to use this tool versus alternatives (e.g., convert_angle, convert_area). It offers no 'when not to use' notes or contextual hints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_energy (grade B)

Convert energy between J, kJ, cal, kcal, kWh, BTU, eV, ft-lb. Use for nutrition, electricity, science. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Energy value | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only states 'convert' without explaining rounding, precision, error handling, or any side effects. This is insufficient for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no wasted words. However, it is not front-loaded with the most critical information; it simply states the obvious.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple conversion tool, the description is adequate but minimal. It does not specify the output format or behavior with invalid values, though the schema covers inputs well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters, including enums for 'from' and 'to'. The description adds no extra meaning beyond the schema, so the baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Convert between energy units', clearly defining the verb (convert) and resource (energy units). This distinguishes it from sibling conversion tools for other units such as angle, area, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention exclusions, prerequisites, or scenarios where other conversion tools would be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_fuel_consumption (grade C)

Convert fuel consumption between L/100km, mpg-US, mpg-UK, km/L. Use for car comparison across regions. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Consumption value | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
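L/100km and mpg are inversely related (one measures fuel per distance, the other distance per fuel), so this conversion is a reciprocal with a fixed constant rather than a linear factor. A sketch of the US-gallon case (the constants are the standard exact definitions; the function names are mine, and this is not the server's actual code):

```python
# Reciprocal L/100km <-> mpg (US) conversion sketch.
# 1 mile = 1.609344 km and 1 US gallon = 3.785411784 L (exact definitions).
MPG_US_CONSTANT = 100 * 3.785411784 / 1.609344  # ~235.215

def l_per_100km_to_mpg_us(l_per_100km: float) -> float:
    """Lower L/100km (more efficient) means higher mpg."""
    return MPG_US_CONSTANT / l_per_100km

def mpg_us_to_l_per_100km(mpg_us: float) -> float:
    """Same constant works in both directions because the relation is reciprocal."""
    return MPG_US_CONSTANT / mpg_us

print(round(l_per_100km_to_mpg_us(8.0), 1))  # 29.4
```

The mpg-UK case would use the imperial gallon (4.54609 L) in place of the US one, giving a different constant.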
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral details such as rounding rules, handling of invalid inputs, or output format. The agent has limited information about what the tool does beyond conversion.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. It is concise but lacks structural elements like front-loading key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is very minimal, lacking details on return values or behavior. Given no output schema and no annotations, the description should provide more context for a conversion tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with each parameter having a brief description. The tool description adds no additional meaning beyond the schema, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts between fuel consumption units. The verb 'convert' and resource 'fuel consumption units' are specific, though it doesn't differentiate from sibling conversion tools beyond the topic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like other conversion tools or calculate_fuel_consumption. The description lacks context for appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_pressure (grade C)

Convert pressure between Pa, kPa, MPa, bar, psi, atm, mmHg, mbar, torr. Use for engineering, tires, weather. Inputs: value, from-unit, to-unit. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Pressure value | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states the basic conversion action but does not disclose any behavioral traits such as idempotency, side effects, or statelessness. For a simple conversion tool, this is a minimal but acceptable gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that front-loads the verb. It is appropriately short for a simple tool, though it could be slightly more structured to include context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description covers the core action. However, it lacks usage guidelines and behavioral context, and the presence of a similar sibling tool suggests more completeness is needed. The schema provides details for parameters, so overall it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond what the schema already provides for the three parameters (value, from, to). No extra context about units or ranges is given.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Convert between pressure units' clearly states the verb and resource. However, it does not differentiate from the sibling tool 'calculate_pressure_convert', which likely has overlapping functionality. The purpose is clear but lacks distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'calculate_pressure_convert'. There is no mention of prerequisites, limitations, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_shoe_size (grade B)

Convert shoe sizes between EU, US (men/women), and UK systems. Use for online shopping. Inputs: size, from, to, gender. Returns converted size. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target system | -
from | Yes | Source system | -
value | Yes | Shoe size | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It only states the basic conversion function without details on precision, rounding, or lack of side effects. For a simple conversion tool, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence. It is concise, but could benefit from a bit more context without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of a shoe size conversion tool and the complete schema, the description is adequate. However, it could improve by addressing the existence of the sibling tool 'calculate_shoe_size_convert'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions in the schema. The tool description adds no additional meaning beyond what the schema already provides, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Convert', the resource 'shoe size systems', and lists the specific systems (EU, US, UK). However, it does not differentiate from the sibling tool 'calculate_shoe_size_convert', which likely performs the same function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for converting shoe sizes, but provides no explicit guidance on when to use this tool versus alternatives, prerequisites, or conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_speed (grade B)

Convert speed between km/h, mph, m/s, knots, ft/s. Use for travel, sport, or maritime/aviation. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)

Name | Required | Description | Default
to | Yes | Target unit | -
from | Yes | Source unit | -
value | Yes | Speed value | -

Output Schema (JSON Schema)

Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It only says 'convert' without detailing any behavioral traits like rounding accuracy, output format, or limitations. The schema supplies unit enums, but the description adds no behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at five words, with no wasted text. However, it could include a brief mention of the supported units or the conversion nature without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple unit converter with a clear schema, the description is minimally adequate. It identifies the tool's function but omits details like the set of supported units (though schema provides that) or the output format. It could be more complete without overcomplicating.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for all three parameters (value, from, to). The description does not add any additional meaning beyond what the schema provides, meriting a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Convert between speed units' uses a specific verb (convert) and resource (speed units), clearly distinguishing it from sibling tools that convert other unit types like area or temperature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, such as other conversion tools or calculating speed from distance/time. The description provides no context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_temperature: A

Convert temperature between Celsius, Fahrenheit, and Kelvin. Use for cooking, weather, science. Inputs: value, from, to. Returns converted temperature. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Target unit
from | Yes | Source unit
value | Yes | Temperature value

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not disclose any behavioral traits beyond conversion, but for a simple mathematical operation with no side effects, this is adequate. A more detailed description could cover handling of invalid inputs or precision, but the schema covers type constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that captures the tool's purpose with no wasted words. It is front-loaded and easy to read.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the absence of an output schema, the description does not explain the return format (e.g., a number with unit label). For such a straightforward conversion, this is a minor gap but could be improved for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (value, from, to each have descriptions). The tool description adds no additional meaning beyond what is already in the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Convert between Celsius, Fahrenheit, and Kelvin' clearly states the action (convert) and the specific temperature units, distinguishing it from sibling tools that handle other conversions like angle or area.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates usage when temperature conversion is needed, and the tool's name and unit specificity provide clear context. However, it does not explicitly mention when not to use it or provide alternatives, but given the straightforward task, this is acceptable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
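The Celsius/Fahrenheit/Kelvin conversion this tool exposes is easy to make concrete. The sketch below shows only the underlying math; it is an illustration, not the server's actual implementation:

```python
def convert_temperature(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between 'C', 'F', and 'K', using Celsius as the pivot unit."""
    # Normalize the input to Celsius first...
    to_c = {
        "C": lambda v: v,
        "F": lambda v: (v - 32) * 5 / 9,
        "K": lambda v: v - 273.15,
    }
    # ...then map Celsius to the requested target unit.
    from_c = {
        "C": lambda c: c,
        "F": lambda c: c * 9 / 5 + 32,
        "K": lambda c: c + 273.15,
    }
    return from_c[to_unit](to_c[from_unit](value))

print(convert_temperature(100, "C", "F"))  # 212.0
print(convert_temperature(0, "C", "K"))    # 273.15
```

Going through Celsius as a pivot keeps the table at 2n entries instead of n² unit pairs, which is the usual design for affine unit conversions.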

convert_time: A

Convert time between seconds, minutes, hours, days, weeks, months, years. Use for project planning or unit homogenization. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Target unit
from | Yes | Source unit
value | Yes | Time value

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as precision, rounding, handling of leap years, or month approximations. This leaves the agent without critical operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that directly conveys the tool's purpose without extraneous information. It is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (3 parameters, no nested objects, no output schema), the description provides the minimum viable context. However, it lacks details on conversion assumptions (e.g., month length) that would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter having a basic description (e.g., 'value' = 'Time value'). The tool description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Convert between time units,' which clearly indicates the verb (convert) and resource (time units). It distinguishes this tool from sibling converters for other unit types (e.g., convert_distance, convert_temperature).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for time unit conversion but does not provide explicit guidance on when to use this tool versus alternatives, such as converting time zones or handling special cases like month definitions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
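The month-length ambiguity the review flags can be made concrete. The sketch below uses one common convention (an average Gregorian month of 30.44 days, a Julian year of 365.25 days); these factors are assumptions chosen for illustration, and the server may use different ones:

```python
# Seconds per unit; the month and year entries are averages and an
# explicit assumption, not necessarily what the server uses.
SECONDS_PER = {
    "seconds": 1,
    "minutes": 60,
    "hours": 3600,
    "days": 86400,
    "weeks": 604800,
    "months": 86400 * 30.44,   # average Gregorian month
    "years": 86400 * 365.25,   # Julian year
}

def convert_time(value: float, from_unit: str, to_unit: str) -> float:
    # Normalize to seconds, then rescale to the target unit.
    return value * SECONDS_PER[from_unit] / SECONDS_PER[to_unit]

print(convert_time(2, "hours", "minutes"))  # 120.0
```

Because the month factor is an average, round-tripping through months will not agree with calendar arithmetic; that is exactly the behavioral caveat the review says the description should disclose.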

convert_volume: C

Convert volume between L, mL, cL, fl_oz, cup, tbsp, tsp, gal_us, gal_uk. Use for cooking and science. Inputs: value, from, to. Returns: {input}. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Target unit
from | Yes | Source unit
value | Yes | Volume value

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the minimal description does not disclose behavioral traits such as being a deterministic, stateless computation, or the nature of the return value. It adds no context beyond the tool's basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with no wasted words. It front-loads the core purpose, though it could benefit from a brief sentence on usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the rich schema, the description is minimally adequate. However, it omits details like the output format or any constraints, which would be helpful for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter already described. The tool description adds no additional meaning to the parameters, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool converts between volume units, providing a specific verb and resource. However, it does not differentiate from sibling conversion tools like convert_area or convert_distance, which use the same pattern.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives. The description is too generic to help an agent decide among the many similar conversion tools available.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

convert_weight: B

Convert weight/mass between kg, g, mg, lb, oz, st, tonne. Use for cooking, shipping, or fitness. Inputs: value, from, to. Returns converted mass. See list_bundles for related 'conversions' calculators.

Parameters (JSON Schema)
Name | Required | Description | Default
to | Yes | Target unit
from | Yes | Source unit
value | Yes | Weight value

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only states 'Convert between weight/mass units' without mentioning rounding, precision, limits, error handling, or output format. This is insufficient for safe and correct agent invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, concise sentence with no extraneous information. Every word is necessary and directly contributes to understanding the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given three required parameters, enum units, and no output schema, the description is too sparse. It does not explain the return format, the supported units (though the enums exist), or edge cases such as negative values or extreme inputs. More context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and parameters already have clear descriptions (source unit, target unit, value). The description adds no additional meaning beyond what the schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Convert between weight/mass units' clearly states the verb (convert) and the resource (weight/mass units). The tool name 'convert_weight' and the sibling conversion tools make the purpose distinct and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. Among siblings, there are many conversion tools (convert_angle, convert_area, etc.), but the description does not provide any criteria or exclusions. A 2 reflects the absence of any usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
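For purely factor-based units like weight, a single table of ratios to a base unit suffices. The sketch below uses the exact international definitions for lb, oz, and st; it illustrates the math, not the server's actual code:

```python
# Kilograms per unit, using exact definitions for the imperial units.
KG_PER = {
    "kg": 1.0,
    "g": 0.001,
    "mg": 1e-6,
    "lb": 0.45359237,        # international avoirdupois pound
    "oz": 0.45359237 / 16,   # 16 oz per lb
    "st": 0.45359237 * 14,   # stone = 14 lb
    "tonne": 1000.0,
}

def convert_weight(value: float, from_unit: str, to_unit: str) -> float:
    # Normalize to kilograms, then rescale to the target unit.
    return value * KG_PER[from_unit] / KG_PER[to_unit]

print(round(convert_weight(10, "kg", "lb"), 3))  # 22.046
```

This is the kind of precision detail (exact lb definition, rounding of results) the Behavior review says the tool description should surface.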

get_bundle_tools: B

Get the list of tools in a specific bundle. Returns tool names and descriptions for the requested domain bundle.

Parameters (JSON Schema)
Name | Required | Description | Default
bundle_id | Yes | Bundle ID from list_bundles

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden of behavioral disclosure. It only states that the tool 'returns tool names and descriptions' but does not mention that it is read-only (safe to call repeatedly), whether it requires authentication, or whether it is rate-limited. For a simple retrieval tool, the missing read-only indication and the absence of any hint about response size reduce transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long and contains no extraneous words. It front-loads the core action in the first sentence. However, it is so brief that it omits potentially valuable context (like the relationship to list_bundles) that could be added without bloat.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description is not complete enough. It does not specify the structure of the returned list (e.g., an array of objects with name and description fields), nor does it mention any prerequisites (e.g., calling list_bundles first). The lack of output schema means the description should provide more detail on the return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with the bundle_id parameter having an enum and a description already in the schema ('Bundle ID from list_bundles'). The description adds minimal semantic value beyond the schema, restating the parameter's role in context ('Get the list of tools in a specific bundle'). This meets the baseline but does not exceed it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's action ('Get the list of tools') and its resource ('in a specific bundle'). It distinguishes from the sibling 'list_bundles' (which returns bundles, not tools) by stating it returns tool names and descriptions for a given bundle. The verb 'Get' and noun 'tools in a specific bundle' are specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it operates on a 'specific bundle', but it does not explicitly advise when to use this tool over alternatives (e.g., list_bundles), and it offers no guidance on when not to use it, such as noting that bundle_id must be obtained from list_bundles first. The usage context is inferred rather than stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_bundles: A

List all available tool bundles (grouped by domain). Use this to discover which tools are available for a specific topic instead of browsing all 500+ tools.

Parameters (JSON Schema)
Name | Required | Description | Default
_unused | No | No parameters needed

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | No | Computed result. Object whose fields depend on the tool (e.g. {tax, marginal_rate, brackets} for tax tools, {volume_l, gallons} for volume tools).
source | No | Authoritative source for the rule or formula (e.g. "Article 197 CGI", "NF DTU 21").
formula | No | Human-readable formula or method used (e.g. "I=P·r·t", "Magnus formula").
reference_url | No | Link to a calcul2 page documenting the calculation in detail.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool lists bundles, implying a read-only operation, but does not disclose the return structure, pagination, or any potential side effects. The description is adequate for a simple list tool but lacks deeper behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, 21 words, and front-loads the purpose without any wasted words. It is efficiently concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with no output schema, the description covers the core purpose and usage context. It does not explain the return format or relationship to sibling `get_bundle_tools`, but given low complexity, it is mostly complete for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (a dummy parameter described as 'No parameters needed'). The tool description adds no additional meaning beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all available tool bundles grouped by domain, a specific verb and resource pairing. It also distinguishes itself from browsing all 500+ tools individually. However, it lacks differentiation from the sibling tool `get_bundle_tools`, which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a clear context for using the tool (to discover tools by topic) and implies when not to use it (instead of browsing all tools). However, it does not explicitly mention when to avoid using it or compare it to alternatives like `get_bundle_tools`.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
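The discovery flow the reviews describe (call list_bundles first, then pass one of its bundle IDs to get_bundle_tools) corresponds to two standard MCP tools/call requests. Below is a sketch of the JSON-RPC payloads a client would send over the Streamable HTTP transport; the 'conversions' bundle_id is a hypothetical example:

```python
import json

# Step 1: discover bundles instead of browsing all 500+ tools.
list_bundles_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_bundles", "arguments": {}},
}

# Step 2: fetch the tools of one bundle, using a bundle_id returned by
# step 1 ("conversions" is a hypothetical example id).
get_bundle_tools_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_bundle_tools",
        "arguments": {"bundle_id": "conversions"},
    },
}

print(json.dumps(list_bundles_request, indent=2))
```

This two-step pattern is exactly the prerequisite the get_bundle_tools review says the description should state explicitly: bundle_id is only meaningful after a list_bundles call.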
