Server Details

Negotiation math engine. Pareto frontier, counteroffer generation, zero LLM tokens.

Status: Healthy
Transport: Streamable HTTP
Repository: rjandino/zopaf
GitHub Stars: 0
Server Listing: zopaf-mcp

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

Tool Descriptions: B

Average 3.5/5 across 9 of 9 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose in the negotiation coaching workflow, with no overlap or ambiguity. For example, create_session initiates a session, add_issue adds negotiable terms, analyze_deal evaluates deals, and process_counterpart_response refines offers based on counterpart reactions, all serving unique functions.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, such as create_session, add_issue, and set_batna. This uniformity makes the tool set predictable and easy to understand, with no deviations in naming conventions.

Tool Count: 5/5

With 9 tools, the server is well-scoped for its negotiation coaching domain, covering session management, issue handling, analysis, and response processing. Each tool earns its place by addressing specific aspects of the negotiation lifecycle without being excessive or insufficient.

Completeness: 5/5

The tool set provides complete coverage for negotiation coaching, including session creation, issue management, preference recording, BATNA setting, deal analysis, counteroffer generation, and response processing. There are no obvious gaps, enabling agents to handle the full negotiation workflow from start to finish.
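The negotiation lifecycle described above can be sketched as a call order. Tool names come from this listing; the ordering is inferred from the descriptions (only "Call this first" for create_session is explicit), so treat it as an illustration, not documented behavior.

```python
# Hypothetical call order for a zopaf-mcp session. Names are from the
# listing; the sequencing after create_session is an assumption.
workflow = [
    "create_session",                # returns the session_id used below
    "add_issue",                     # declare each negotiable term
    "set_issue_range",               # map numeric issues onto 0-100 scoring
    "record_preference",             # weight issues as priorities emerge
    "set_batna",                     # record walk-away alternatives
    "generate_counteroffers",        # produce 3 equal-value offers
    "process_counterpart_response",  # infer priorities from the reaction
    "analyze_deal",                  # score a concrete deal on the frontier
    "get_negotiation_state",         # inspect the model at any point
]
assert workflow[0] == "create_session"  # the one ordering the docs state
```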

Available Tools

9 tools
add_issue (B)

Add a negotiable issue/term to the session. Options ordered WORST to BEST for the user (e.g., ['$150K', '$160K', '$170K', '$180K']).

Parameters (JSON Schema):
- options (required)
- issue_name (required)
- session_id (required)

Output Schema:
- result (required)
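A hypothetical argument payload for add_issue, with parameter names from the schema table above and invented values. The worst-to-best ordering comes straight from the tool description.

```python
# Hypothetical add_issue arguments; values are illustrative only.
args = {
    "session_id": "sess-001",   # returned earlier by create_session
    "issue_name": "salary",
    "options": ["$150K", "$160K", "$170K", "$180K"],  # WORST to BEST
}
# Sanity-check the worst-to-best ordering by parsing the amounts.
amounts = [int(o.strip("$K")) for o in args["options"]]
assert amounts == sorted(amounts)
```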
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that options are ordered from worst to best for the user, which adds some context about the tool's behavior. However, it lacks details on permissions, side effects (e.g., whether this modifies an existing session), response format, or error handling, leaving significant gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and provides essential usage guidance without any wasted words. Every part of the sentence contributes to understanding the tool's purpose and parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters with 0% schema coverage and an output schema exists (which reduces the need to describe return values), the description is moderately complete. It covers the key behavioral aspect for 'options' but misses details on other parameters, permissions, and error cases, making it adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the 'options' parameter by explaining they should be ordered from worst to best for the user, which goes beyond the schema's minimal coverage (0%). Since there are 3 parameters and schema coverage is low, this compensation is valuable, though it doesn't cover 'session_id' or 'issue_name' semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Add') and resource ('negotiable issue/term to the session'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'set_issue_range' or 'record_preference', which might handle similar negotiation concepts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying that options should be ordered 'WORST to BEST for the user', providing some context for when to use this tool. However, it doesn't explicitly state when to use this versus alternatives like 'set_issue_range' or 'record_preference', nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

analyze_deal (B)

Analyze a specific deal against the Pareto frontier. Returns deal quality, value captured, value left on table, and suggested trades to improve the deal.

Parameters (JSON Schema):
- deal (required)
- session_id (required)

Output Schema:
- result (required)
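The Pareto check that analyze_deal implies can be sketched in a few lines: a package is off the frontier if some other package scores at least as well on every issue and strictly better on one. The scoring tuples below are invented, not taken from the server.

```python
# Minimal Pareto-dominance sketch under invented (user, counterpart)
# 0-100 scores; this is an illustration of the concept, not the
# server's actual algorithm.
def dominates(a, b):
    """True if package a is >= b everywhere and > b somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

deals = {"A": (80, 40), "B": (60, 70), "C": (55, 65)}
frontier = [name for name, v in deals.items()
            if not any(dominates(w, v) for w in deals.values() if w != v)]
assert frontier == ["A", "B"]  # C is dominated by B
```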
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the analysis returns specific metrics and suggestions, but lacks details on computational behavior (e.g., is it resource-intensive?), error handling, or dependencies (e.g., requires a valid session_id). This is a significant gap for a tool with complex input/output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficiently structured in two sentences: the first states the action and scope, the second lists the return values. Every sentence adds value without redundancy, making it appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (which handles return values) and the description's clear purpose and concise structure, it is mostly complete. However, gaps in behavioral transparency and parameter semantics reduce completeness, especially since annotations are absent and schema coverage is low.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter details. The description adds minimal semantics by mentioning 'deal' as the object to analyze, but does not explain what 'session_id' is or the structure/expectations for the 'deal' object. It partially compensates but leaves key parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('analyze a specific deal against the Pareto frontier') and the resource ('deal'), distinguishing it from siblings like 'generate_counteroffers' or 'get_negotiation_state'. It provides a concrete outcome: returns deal quality, value captured, value left on table, and suggested trades.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'generate_counteroffers' or 'set_batna'. The description implies usage for deal analysis but lacks context on prerequisites (e.g., after creating a session) or exclusions (e.g., not for initial deal creation).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_session (A)

Create a new negotiation coaching session. Returns a session_id to use with all other tools. Call this first.

Parameters (JSON Schema): none

Output Schema:
- result (required)
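create_session takes no arguments, so the interesting part is the result. The wrapped shape below is an assumption (the output schema only names a `result` field); only the session_id hand-off is documented.

```python
# Hypothetical create_session result; the "session_id" field name is
# from the description, the surrounding shape is invented.
result = {"session_id": "sess-001"}
session_id = result["session_id"]  # pass this to every other tool
assert isinstance(session_id, str) and session_id
```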
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns a session_id for use with other tools, which is useful behavioral context. However, it lacks details on error conditions, authentication needs, or rate limits that would be helpful for a session creation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the purpose, the second provides critical usage guidance. It is front-loaded with essential information and appropriately sized for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, output schema exists), the description is mostly complete. It explains the purpose, usage order, and output utility. However, as a session creation tool with no annotations, it could benefit from mentioning potential errors or prerequisites, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on the tool's purpose and output rather than redundant parameter details, earning a baseline score of 4 for this scenario.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a new negotiation coaching session') and the resource ('session'), distinguishing it from siblings like 'add_issue' or 'analyze_deal' which operate on existing sessions rather than creating them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Call this first,' providing clear when-to-use guidance that this tool should be invoked before any other sibling tools, which all presumably require a session_id returned by this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_counteroffers (A)

Generate 3 counteroffers that are equally good for the user but structured differently. Present ALL THREE simultaneously to the counterpart — their reaction reveals what they care about. target_satisfaction: 'ambitious', 'moderate', or 'conservative'.

Parameters (JSON Schema):
- session_id (required)
- target_satisfaction (optional, default: moderate)

Output Schema:
- result (required)
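A hypothetical argument payload for generate_counteroffers. The three enum values are quoted verbatim in the tool description, and the table shows "moderate" as the default; everything else is invented.

```python
# Hypothetical generate_counteroffers arguments; values illustrative.
args = {
    "session_id": "sess-001",
    "target_satisfaction": "ambitious",  # or "moderate" / "conservative"
}
assert args["target_satisfaction"] in {"ambitious", "moderate", "conservative"}
```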
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it does well by disclosing key behavioral traits: it generates exactly 3 offers, structures them differently while maintaining equal user benefit, presents them simultaneously, and uses the counterpart's reaction diagnostically. It doesn't mention side effects, permissions, or rate limits, but covers the core operational behavior adequately for a tool without annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: two dense sentences that cover purpose, methodology, and usage without any wasted words. Every phrase adds value, and the structure moves logically from what the tool does to how to use it strategically.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (strategic offer generation), no annotations, and the presence of an output schema (which handles return values), the description is nearly complete. It covers the core operation and strategy well but lacks information about the 'session_id' parameter and any error conditions or prerequisites. The output schema existence relieves it from explaining return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It fully explains the 'target_satisfaction' parameter with its three enum values ('ambitious', 'moderate', 'conservative'), adding crucial meaning beyond the schema's bare title. However, it doesn't mention the 'session_id' parameter at all, leaving one of two parameters undocumented in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Generate 3 counteroffers') and resource (counteroffers for negotiation), with explicit details about quantity, structure, and presentation method. It distinguishes from siblings like 'process_counterpart_response' or 'analyze_deal' by focusing on proactive offer generation rather than analysis or response handling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Present ALL THREE simultaneously to the counterpart — their reaction reveals what they care about.' This tells the agent when to use this tool (to test counterpart priorities through multiple structured offers) and implies it's for strategic revelation rather than just offer creation. It doesn't name alternatives but clearly defines the tactical context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_negotiation_state (B)

Get current state of the negotiation model — issues, weights, BATNA, frontier size, and what to do next.

Parameters (JSON Schema):
- session_id (required)

Output Schema:
- result (required)
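get_negotiation_state is a read-only status call with session_id as its only input. The state dict below sketches the fields the description lists (issues, weights, BATNA, frontier size, next step); the exact shape and values are invented, since the output schema only exposes a `result` field.

```python
# Hypothetical get_negotiation_state call and a sketched response.
args = {"session_id": "sess-001"}
state = {  # invented shape, not from the schema
    "issues": ["salary", "start_date"],
    "weights": {"salary": 0.7, "start_date": 0.3},
    "batna": ["stay in current role"],
    "frontier_size": 12,
    "next_step": "generate_counteroffers",
}
# If weights are priority shares, they should sum to 1 (an assumption).
assert abs(sum(state["weights"].values()) - 1.0) < 1e-9
```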
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool retrieves state information, implying it's a read-only operation, but doesn't disclose behavioral traits such as whether it requires authentication, has rate limits, returns real-time or cached data, or handles errors. The mention of 'what to do next' hints at advisory output, but this is vague.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and lists key components without unnecessary words. Every part earns its place by specifying what the tool retrieves, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (which likely covers return values), the description doesn't need to explain outputs. However, with no annotations and a simple input schema, the description adequately states the purpose but lacks details on usage context, parameter meaning, and behavioral aspects, leaving gaps for a state-retrieval tool in a negotiation system.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, so the schema provides no semantic context. The description doesn't mention any parameters, failing to compensate for the low coverage. However, with only one parameter, the baseline is higher; the description implies a negotiation context but doesn't explain what 'session_id' represents or its format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get current state of the negotiation model' with specific components listed (issues, weights, BATNA, frontier size, and what to do next). It uses a specific verb ('Get') and identifies the resource ('negotiation model state'), but doesn't explicitly differentiate from sibling tools like 'analyze_deal' or 'create_session'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'what to do next' which implies a diagnostic use, but doesn't specify prerequisites (e.g., requires an existing session) or contrast with other tools like 'analyze_deal' for deeper analysis or 'create_session' for initialization.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

process_counterpart_response (B)

After counterpart reacts to 3 offers, process their response to infer priorities and generate a refined round-2 offer positioned on the efficient frontier.

Parameters (JSON Schema):
- session_id (required)
- pushback_issues (optional)
- preferred_package (required)
- counterpart_counter (optional)

Output Schema:
- result (required)
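A hypothetical argument payload for process_counterpart_response, built after presenting the three offers. Only session_id and preferred_package are required per the table; the value semantics are assumptions, since the schema documents none of them.

```python
# Hypothetical process_counterpart_response arguments; values invented.
args = {
    "session_id": "sess-001",
    "preferred_package": "B",           # offer the counterpart liked most
    "pushback_issues": ["start_date"],  # optional: issues they resisted
    # "counterpart_counter" is optional and omitted here
}
assert {"session_id", "preferred_package"} <= set(args)
```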
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool 'processes' a response to 'infer priorities and generate a refined round-2 offer,' which suggests analysis and generation functions, but lacks details on permissions needed, rate limits, whether it modifies data (e.g., updates session state), or what the 'efficient frontier' entails. For a tool with 4 parameters and no annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and context. It's front-loaded with key information and avoids unnecessary words, though it could be slightly more detailed given the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no annotations, but has an output schema), the description is moderately complete. It explains the high-level purpose but lacks details on parameter meanings, behavioral traits, and how it interacts with sibling tools. The presence of an output schema reduces the need to describe return values, but more context is needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. The description mentions 'counterpart reacts to 3 offers' and 'process their response,' which loosely relates to parameters like 'preferred_package' or 'counterpart_counter,' but doesn't explain what 'session_id', 'pushback_issues', or 'preferred_package' mean in this context. It adds minimal semantic value beyond the schema titles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'process their response to infer priorities and generate a refined round-2 offer positioned on the efficient frontier.' It specifies the action (process response), the context (after counterpart reacts to 3 offers), and the outcome (infer priorities, generate refined offer). However, it doesn't explicitly distinguish this tool from sibling tools like 'analyze_deal' or 'generate_counteroffers'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: 'After counterpart reacts to 3 offers,' implying this tool should be used specifically after that event. However, it doesn't explicitly state when NOT to use it or mention alternatives among sibling tools like 'analyze_deal' or 'generate_counteroffers' for similar analysis tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

record_preference (B)

Record that the user prefers gains on preferred_issues over over_issues. Updates the internal priority weight model. Call this when you learn priorities through conversation.

Parameters (JSON Schema):
- session_id (required)
- over_issues (required)
- preferred_issues (required)

Output Schema:
- result (required)
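A hypothetical argument payload for record_preference. Reading the description, the call asserts "the user values gains on preferred_issues more than gains on over_issues"; the issue names below are invented.

```python
# Hypothetical record_preference arguments; issue names are invented.
args = {
    "session_id": "sess-001",
    "preferred_issues": ["salary"],     # the side the user values more
    "over_issues": ["start_date"],      # the side being traded away
}
# The two lists should presumably not overlap (an assumption).
assert not set(args["preferred_issues"]) & set(args["over_issues"])
```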
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool 'Updates the internal priority weight model,' implying a mutation, but doesn't disclose behavioral traits such as whether this requires specific permissions, if changes are reversible, potential side effects, or rate limits. This is a significant gap for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with two concise sentences: the first states the action and parameters, the second provides usage guidance. Every sentence earns its place without waste, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with 3 parameters), no annotations, and an output schema (which reduces the need to explain return values), the description is moderately complete. It covers purpose and usage but lacks details on parameters, behavioral transparency, and how it integrates with siblings, leaving gaps for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter details. The description mentions 'preferred_issues' and 'over_issues' but doesn't explain their semantics (e.g., what constitutes an issue, format, or constraints) or the 'session_id' parameter. It adds minimal value beyond naming parameters, failing to compensate for the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Record that the user prefers gains on preferred_issues over over_issues' and 'Updates the internal priority weight model.' It specifies the verb (record/update) and resource (priority weight model), but doesn't explicitly differentiate from sibling tools like set_batna or set_issue_range, which might also affect priorities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: 'Call this when you learn priorities through conversation.' This gives a specific trigger (learning priorities in conversation), but doesn't mention when not to use it or name alternatives among siblings like set_batna, which might also handle priority adjustments.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_batna (B)

Record the user's BATNA — their alternatives if the deal falls through. Determines leverage and anchoring strategy.

Parameters (JSON Schema):
- session_id (required)
- alternatives (required)

Output Schema:
- result (required)
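A hypothetical argument payload for set_batna. The description defines "alternatives" as the user's fallback options if the deal falls through; whether it is a list of strings is an assumption the schema does not confirm.

```python
# Hypothetical set_batna arguments; alternatives are invented and
# assumed to be free-text strings.
args = {
    "session_id": "sess-001",
    "alternatives": ["stay in current role", "competing offer at $155K"],
}
assert len(args["alternatives"]) >= 1  # at least one fallback
```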
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states 'Record', which implies a write/mutation operation, but doesn't disclose behavioral traits such as whether the call is idempotent, requires specific permissions, affects other data, or has side effects. The mention of 'determines leverage and anchoring strategy' hints at downstream impacts but lacks specifics on behavior or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: two concise sentences that directly state the tool's purpose and its strategic role. Every sentence earns its place with no wasted words, making it efficient for agent comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with 0% schema coverage, no annotations, and an existing output schema, the description is minimally adequate. It covers the core purpose and partially conveys parameter semantics, but lacks usage guidelines and behavioral details. The output schema reduces the need to explain return values, but for a mutation tool with no annotations, more context on behavior and constraints would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains 'alternatives' as 'the user's alternatives if the deal falls through', adding semantic meaning beyond the schema's bare 'Alternatives' title. However, it doesn't clarify 'session_id' or provide format/examples for 'alternatives' array items, leaving gaps in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Record the user's BATNA' with the specific action 'record' and resource 'BATNA'. It adds context about purpose ('determines leverage and anchoring strategy'), but doesn't explicitly differentiate from sibling tools like 'record_preference' or 'set_issue_range' which might handle related negotiation data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the tool's purpose but doesn't indicate prerequisites, timing (e.g., during deal setup vs. ongoing negotiation), or when to choose it over siblings like 'record_preference' or 'add_issue'. This leaves the agent without clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_issue_rangeB

Set the user's acceptable range for a numeric issue. Enables proper 0-100 scoring. E.g., worst_acceptable=150000, best_hoped=200000, option_values={'$150K': 150000, '$180K': 180000}.
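The inline example maps directly onto a call. A sketch, with argument names taken from the schema and the description's own numbers; the session_id value and the issue label are hypothetical:

```python
# Sketch of set_issue_range arguments built from the description's example.
# Argument names come from the schema; session_id and issue_name values
# are assumptions.
arguments = {
    "session_id": "sess-123",        # assumed; schema gives no format
    "issue_name": "base_salary",     # hypothetical issue label
    "worst_acceptable": 150000,      # from the description's example
    "best_hoped": 200000,
    "option_values": {"$150K": 150000, "$180K": 180000},
}
```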


Parameters (JSON Schema)

Name               Required   Description   Default
best_hoped         Yes
issue_name         Yes
session_id         Yes
option_values      Yes
worst_acceptable   Yes

Output Schema

Parameters (JSON Schema)

Name     Required   Description
result   Yes
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states the tool sets a range for scoring, implying a write operation, but doesn't disclose behavioral traits like whether changes are permanent, if it requires specific permissions, or what the output contains. The example helps but doesn't cover key operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first states purpose and benefit, the second gives a clear example. It's front-loaded and efficient, though the example could be slightly more concise by omitting the dollar signs in the option values if not essential.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with 0% schema coverage and no annotations, the description does well with the example but lacks context on mutation behavior, permissions, or error handling. The presence of an output schema (not shown) might help, but the description alone is minimally adequate with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides a concrete example that clarifies the meaning of 'worst_acceptable', 'best_hoped', and 'option_values' (as a mapping of labels to numeric values), adding significant value beyond the bare schema. This fully addresses the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
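The "proper 0-100 scoring" the description promises is presumably a linear rescale of a numeric offer onto the acceptable range; the server's actual formula is not documented, so the following is a sketch under that assumption:

```python
def score_offer(value: float, worst_acceptable: float, best_hoped: float) -> float:
    """Map a numeric offer onto a 0-100 scale, clamped to the range.

    Assumed interpretation of the tool's '0-100 scoring'; the server's
    real formula is not documented.
    """
    span = best_hoped - worst_acceptable
    raw = (value - worst_acceptable) / span * 100
    return max(0.0, min(100.0, raw))

# With the description's example range (150000..200000):
print(score_offer(180000, 150000, 200000))  # 60.0
print(score_offer(140000, 150000, 200000))  # 0.0 (clamped below the range)
```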

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Set') and resource ('user's acceptable range for a numeric issue'), and explains the purpose ('Enables proper 0-100 scoring'). It doesn't explicitly differentiate from siblings like 'add_issue' or 'record_preference', but the specific focus on range setting is reasonably distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'add_issue' or 'record_preference'. The example implies usage for numeric issues with scoring, but lacks explicit when/when-not instructions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
