Netfluid
Server Details
AI agent banking - fiat and crypto wallet management. Send payments, buy/sell crypto, fund via banks/PayShap/cards, withdraw globally. Virtual SEPA/ACH accounts for fiat on-ramps.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
134 tools

ai__about (Grade A)
Returns general information about netfluid, domains in use, website addresses, policies, support contacts Start Here ! Everything Netfluid This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
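Since ai__about takes no parameters, a call to it is just the bare MCP `tools/call` envelope. The sketch below shows the JSON-RPC 2.0 request shape an MCP client would send; the `id` value and transport details are illustrative, not prescribed by the listing.

```python
import json

# Hedged sketch: the MCP "tools/call" request body for the parameterless
# ai__about tool. Only the JSON-RPC 2.0 shape is shown; the endpoint,
# headers, and id value are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ai__about",
        "arguments": {},  # the tool takes no parameters
    },
}

print(json.dumps(request, indent=2))
```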
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states this is a read-only informational tool ('Returns general information'), which is clear. However, it doesn't disclose any behavioral traits like authentication requirements, rate limits, or what specific format the information will be returned in beyond 'a json object containing the schema'. The description doesn't contradict any annotations since there are none.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured, with inconsistent formatting and capitalization ('tools' instead of 'tool', mixed punctuation). It contains redundant elements like '@return: a json object containing the schema', which is better handled by the output schema. The 'Start Here ! Everything Netfluid' phrase could be integrated more smoothly. While brief, the description lacks professional polish.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a simple informational tool with 0 parameters, no annotations, but has an output schema, the description provides adequate context. It explains what information is returned and suggests it's a starting point. The output schema will handle the return format details, so the description doesn't need to explain return values. However, it could better clarify the relationship to other informational tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description doesn't need to add parameter information since there are no parameters, and it correctly indicates this is a parameterless tool through its description of what it returns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool does: 'Returns general information about netfluid, domains in use, website addresses, policies, support contacts'. It specifies the verb ('returns') and resource ('general information about netfluid') with specific categories listed. However, it doesn't explicitly distinguish this from sibling tools like 'ai__help_ping' or 'ai__support' that might also provide informational content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with 'Start Here ! Everything Netfluid', indicating this should be the first tool to use when exploring the Netfluid system. It also mentions 'This tools provides reference information in the "referenced_tools" schema', which helps the agent understand the context of when to use this tool versus others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__access_pause (Grade A)
Temporarily blocks access to the wallet
Temporarily blocks access to the wallet for all systems. This end point requires an api_key with administrator privileges. @param api_key: The api key with administrator privileges @param wallet_fk: The wallet_fk to pause
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
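ai__access_pause and its counterpart ai__access_resume (described further down) share the same two-parameter signature, so a client can build their argument objects with one helper. A minimal sketch, assuming placeholder credential and wallet values:

```python
# Hedged sketch: both ai__access_pause and ai__access_resume take an
# administrator api_key and a wallet_fk. The values passed in below are
# placeholders, not real credentials.
def access_arguments(api_key: str, wallet_fk: str) -> dict:
    """Build the arguments object both tools expect."""
    return {"api_key": api_key, "wallet_fk": wallet_fk}

pause_args = access_arguments("ADMIN_API_KEY", "wallet_fk_123")
print(pause_args)
```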
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a mutation tool (blocks access), requires specific permissions (administrator privileges), and affects all systems. It could improve by mentioning potential side effects (e.g., duration of pause, impact on users) or rate limits, but it covers the essential safety and authorization context well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by context and parameters. Every sentence adds value, with no redundant information. Minor improvements could include bullet points for parameters, but it remains efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation with security implications), no annotations, and an output schema present (which handles return values), the description is reasonably complete. It covers purpose, usage context, and parameter meanings. To be fully complete, it could mention error conditions or confirm the temporary nature more explicitly, but it meets most needs effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It adds meaningful semantics for both parameters: 'api_key' is described as 'The api key with administrator privileges' (clarifying privilege requirements), and 'wallet_fk' as 'The wallet_fk to pause' (indicating the target). This goes beyond the bare schema, though it could provide more detail on parameter formats or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('temporarily blocks access') and resource ('the wallet'), distinguishing it from siblings like 'ai__access_resume' (which presumably resumes access) and 'ai__account_pause' (which likely pauses accounts rather than wallet access). The verb 'blocks' is precise and the scope 'for all systems' clarifies the extent of the action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Temporarily blocks access to the wallet for all systems.' It also provides prerequisites: 'This end point requires an api_key with administrator privileges.' However, it does not explicitly mention when not to use it or name alternatives (e.g., 'ai__access_resume' for resuming access), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__access_platform_assign (Grade C)
Assigns this wallet_fk to a channel (discord, telegram, whatsapp) user id.
Assigns this wallet_fk to a channel (discord, telegram, whatsapp) user id. Where possible prompt the customer to do this, as it makes getting a session token much easier in the future @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| user_id | Yes | | |
| platform | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It mentions that assignment 'makes getting a session token much easier in the future,' hinting at future authentication benefits, but fails to disclose critical behavioral traits such as whether this is a read-only or destructive operation, required permissions, rate limits, or error handling. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy (the first two sentences are identical) and includes param annotations that could be streamlined. It front-loads the core purpose but wastes space on repetition. While not overly verbose, it lacks optimal efficiency in structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage and no annotations, the description partially compensates by explaining some parameters and hinting at usage benefits. An output schema exists, so return values need not be described. However, the gaps in parameter documentation and behavioral transparency make it incomplete for a tool with multiple required inputs and no annotation support.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists three parameters (api_key, token, wallet_fk) with brief explanations (e.g., 'The api key allocated to your application'), but omits two required parameters (user_id, platform) entirely. This incomplete coverage fails to adequately explain parameter meanings beyond the schema, leaving key inputs undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('assigns') and the resources involved ('wallet_fk to a channel user id'), specifying the channels as discord, telegram, and whatsapp. However, it does not explicitly differentiate from sibling tools like 'ai__access_platform_login' or 'ai__access_platform_wallet_list', which might handle related platform interactions, leaving some ambiguity in sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a guideline to 'prompt the customer to do this, as it makes getting a session token much easier in the future,' which implies a preferred usage context. However, it does not specify when to use this tool versus alternatives (e.g., 'ai__access_platform_login' for login vs. assignment) or any exclusions, providing only implied rather than explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__access_platform_login (Grade A)
Retrieves a session token based on a wallet_fk and 5-digit (numeric) PIN. This is a shortcut to the session token.
Retrieves a session token based on a wallet_fk and 5-digit (numeric) PIN Only 1 attempt is allowed, get the PIN wrong and you have to send the user back to the website to get a session key This works on any wallet_fk and PIN combination, so if you have a wallet_fk, but no session token, it's a shortcut to the session token. @param wallet_fk: The wallet_fk @param pin: The 5-digit numerical pin associated with this wallet_fk
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| pin | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
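Because the server allows only one PIN attempt before forcing the user back to the website, a client should validate the PIN format locally before calling ai__access_platform_login. The 5-digit numeric rule comes from the tool's own description; the helper name is illustrative.

```python
import re

# Hedged sketch: client-side PIN check before the single-attempt
# ai__access_platform_login call. A malformed PIN should never be sent,
# since one wrong attempt locks the shortcut.
def is_valid_pin(pin: str) -> bool:
    """True if pin is exactly 5 numeric digits."""
    return bool(re.fullmatch(r"\d{5}", pin))

assert is_valid_pin("12345")
assert not is_valid_pin("1234")    # too short
assert not is_valid_pin("12a45")   # non-numeric
```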
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the one-attempt limit and the consequence of failure (user must return to website). It also clarifies the tool's scope ('works on any wallet_fk and PIN combination') and return type ('a json object'). However, it doesn't cover potential error responses, rate limits, or authentication requirements beyond the PIN.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but contains redundant repetition ('Retrieves a session token...' appears twice). The @param/@return annotations are helpful but could be integrated more smoothly. Overall, it conveys necessary information but could be more streamlined without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter authentication tool with no annotations but an output schema, the description provides good context. It covers purpose, usage constraints, parameter basics, and return type. The output schema likely details the 'json object' structure, so the description doesn't need to explain return values. However, it lacks details on error handling or security implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description compensates by defining both parameters: 'wallet_fk: The wallet_fk' and 'pin: The 5-digit numerical pin associated with this wallet_fk.' It adds crucial semantics for 'pin' (5-digit, numeric) but provides minimal context for 'wallet_fk' (just repeats the name). Given the coverage gap, this is adequate but not comprehensive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieves a session token based on a wallet_fk and 5-digit (numeric) PIN.' It specifies the verb ('retrieves'), resource ('session token'), and input requirements. However, it doesn't explicitly differentiate this from sibling tools like 'ai__session_2_token' or 'ai__session', which might offer alternative ways to obtain session tokens.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'if you have a wallet_fk, but no session token, it's a shortcut to the session token.' It also includes a critical constraint: 'Only 1 attempt is allowed, get the PIN wrong and you have to send the user back to the website to get a session key.' This offers practical guidance, though it doesn't explicitly compare to alternatives like 'ai__session_2_token' or mention prerequisites beyond having the wallet_fk and PIN.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__access_platform_wallet_list (Grade A)
Retrieves a list of wallets associated with the channel (discord, telegram or whatsapp) and user id
Retrieves a list of wallets associated with the channel (discord, telegram or whatsapp) user id, once you have a wallet_fk you can get a session token. This only works if the wallet_fk has been previously assigned to this channel and user id. @param user_id: The chat, channel user-id specific to this chat customer @param platform: The channel platform: discord, telegram, whatsapp
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| user_id | Yes | | |
| platform | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
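The descriptions imply a two-step flow: ai__access_platform_wallet_list maps a (platform, user_id) pair to wallet_fks, and ai__access_platform_login then exchanges a wallet_fk plus PIN for a session token. A minimal sketch of the two call payloads, with all identifiers as placeholders:

```python
# Hedged sketch of the lookup-then-login flow. Only the tool-call payload
# shapes are shown; sending them over the MCP transport is omitted.
def wallet_list_call(platform: str, user_id: str) -> dict:
    return {"name": "ai__access_platform_wallet_list",
            "arguments": {"platform": platform, "user_id": user_id}}

def login_call(wallet_fk: str, pin: str) -> dict:
    return {"name": "ai__access_platform_login",
            "arguments": {"wallet_fk": wallet_fk, "pin": pin}}

step1 = wallet_list_call("telegram", "user-42")
# ...send step1, read a wallet_fk from the response, then:
step2 = login_call("wallet_fk_from_step1", "12345")
```

Note that step 1 only succeeds if the wallet_fk was previously bound to this channel and user id via ai__access_platform_assign.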
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does reveal important constraints: the tool only works if wallets have been previously assigned to the specific channel and user ID, and it's part of a workflow to obtain session tokens. However, it doesn't disclose whether this is a read-only operation, what authentication might be required, potential rate limits, or error conditions. The description adds some context but leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has some redundancy: the first sentence is repeated verbatim. The information is front-loaded with the core purpose, followed by usage context and parameter details. However, the repetition wastes space, and the structure could be more streamlined by eliminating the duplicate opening sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, but has output schema), the description provides adequate coverage. It explains the purpose, usage context, and parameter meanings. Since an output schema exists, the description doesn't need to explain return values. The main gap is the lack of behavioral details like authentication requirements or error handling, but the core functionality is sufficiently described for an agent to understand when and how to use this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 2 parameters, the description provides essential semantic context through @param annotations. It explains that 'user_id' is 'The chat, channel user-id specific to this chat customer' and 'platform' is 'The channel platform: discord, telegram, whatsapp'. This adds meaningful interpretation beyond the bare schema types (both strings). However, it doesn't provide format examples, constraints, or validation rules for these parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieves a list of wallets associated with the channel (discord, telegram or whatsapp) and user id'. It specifies the verb ('retrieves'), resource ('wallets'), and scope ('associated with the channel and user id'). However, it doesn't explicitly differentiate from sibling tools like 'ai__wallet_accounts_list' or 'ai__wallet_accounts_list_verbose', which appear to have similar wallet-related functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'once you have a wallet_fk you can get a session token' and 'This only works if the wallet_fk has been previously assigned to this channel and user id'. It implies this is a prerequisite step for obtaining session tokens. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__access_recover (Grade B)
Recovers the Netfluid wallet's private key from a set of 24 keywords
Recovers the Netfluid wallet's private key from a set of 24 keywords. Returns an object with the wallet owner's key, first_name, last_name, mobile and email. It's preferable that the person trying to recover @param api_key: The api key allocated to your application @param words: A set of 24 recovery words, space delimited
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| words | Yes | | |
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
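ai__access_recover takes the recovery phrase as one space-delimited string of 24 words, so a client can sanity-check the word count before sending. A sketch, with a generated placeholder phrase standing in for a real mnemonic:

```python
# Hedged sketch: normalize and validate the 24-word recovery phrase that
# ai__access_recover expects in its "words" parameter.
def normalize_recovery_words(raw: str) -> str:
    """Collapse whitespace and verify exactly 24 words."""
    words = raw.split()
    if len(words) != 24:
        raise ValueError(f"expected 24 recovery words, got {len(words)}")
    return " ".join(words)

# Placeholder phrase; a real one would come from the wallet owner.
phrase = normalize_recovery_words(" ".join(f"word{i}" for i in range(24)))
```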
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that the tool returns an object with specific fields (key, first_name, etc.), which adds useful context beyond the input schema. However, it lacks details on security implications (e.g., sensitivity of private key recovery), error handling, or rate limits, which are important for a recovery operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy (the first two sentences are nearly identical). It is front-loaded with the core purpose, but the '@param' and '@return' annotations are informal and could be integrated more smoothly. The sentence 'It's preferable that the person trying to recover' is left unfinished and adds little value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (recovery operation with security implications), no annotations, and an output schema exists (implied by '@return'), the description is fairly complete. It explains the action, parameters, and return structure. However, it misses behavioral details like authentication needs or potential side effects, which are crucial for such a tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantic meaning for both parameters: 'api_key' is 'allocated to your application' and 'words' are '24 recovery words, space delimited'. This clarifies the format and purpose, though it could add more on constraints (e.g., word list source). With 0% coverage, this is strong but not exhaustive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Recovers the Netfluid wallet's private key from a set of 24 keywords.' It specifies the verb ('recovers'), resource ('private key'), and mechanism ('from a set of 24 keywords'). However, it does not explicitly differentiate from sibling tools like 'ai__access_pause' or 'ai__access_resume', which appear to be related to access control but have different functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions it's 'preferable that the person trying to recover' but does not specify prerequisites, exclusions, or compare it to siblings like 'ai__access_platform_wallet_list' or 'ai__wallet_mnemonic'. This leaves the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__access_resume (Grade B)
Temporarily blocks access to the wallet
Resumes access to a paused wallet. This end point requires an api_key with administrator privileges. @param api_key: The api key with administrator privileges @param wallet_fk: The wallet_fk to resume
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a mutation operation (resumes access), requires specific permissions (admin privileges via api_key), and acts on a specific resource (paused wallet). It doesn't mention side effects, rate limits, or response details, but covers essential mutation and auth context adequately for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the misleading first sentence, which wastes space. After that, it efficiently states the purpose, requirements, and parameters. However, the structure is slightly awkward due to the initial error, and it could be more streamlined by removing the contradictory opening line.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with admin requirements), no annotations, 0% schema coverage, but with an output schema (indicated by @return and context signals), the description does well. It explains the action, permissions, and parameters. The output schema handles return values, so the description doesn't need to detail them. It's mostly complete but could benefit from clarifying the misleading first part.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description compensates fully by explaining both parameters: 'api_key: The api key with administrator privileges' and 'wallet_fk: The wallet_fk to resume.' This adds crucial meaning beyond the bare schema types, clarifying the purpose and constraints of each parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Temporarily blocks access to the wallet' which is misleading and contradicts the actual purpose stated in the second sentence: 'Resumes access to a paused wallet.' The first sentence appears to be a copy-paste error from a sibling tool (likely ai__access_pause). While the second sentence correctly identifies the verb ('resumes') and resource ('a paused wallet'), the initial misleading statement reduces clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by stating 'This end point requires an api_key with administrator privileges,' which implies when to use it (when admin privileges are available). However, it doesn't explicitly differentiate from siblings like ai__access_pause (which likely pauses instead of resumes) or ai__account_resume (which might resume accounts rather than wallet access). The guidance is implied but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__access_wallet_pin_change (Grade B)
Changes the wallet PIN, must be 5 digits, e.g. 12345
Changes the wallet PIN, must be 5 digits, e.g. 12345 @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param new_pin: The new PIN, must be 5 numeric digits
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| new_pin | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
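Because the schema leaves all four parameters undescribed, a client can at least enforce the stated 5-digit constraint before calling. A minimal sketch; the base URL and endpoint path are assumptions, and only the parameter names come from the listing:

```python
import re

# Hypothetical base URL; the real Netfluid endpoint paths are not documented here.
API_BASE = "https://api.netfluid.example"

def build_pin_change_request(api_key: str, token: str, wallet_fk: str, new_pin: str) -> dict:
    """Validate the new PIN locally, then assemble the ai__access_wallet_pin_change payload."""
    # The tool description requires exactly 5 numeric digits, e.g. 12345.
    if not re.fullmatch(r"\d{5}", new_pin):
        raise ValueError("new_pin must be exactly 5 numeric digits")
    return {
        "url": f"{API_BASE}/access/wallet_pin_change",  # path is an assumption
        "json": {"api_key": api_key, "token": token,
                 "wallet_fk": wallet_fk, "new_pin": new_pin},
    }
```

Rejecting a malformed PIN client-side avoids spending a call on a request the server would refuse anyway.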
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It states this is a change/mutation operation (implied behavioral trait) and specifies the PIN format constraint. However, it doesn't disclose critical behavioral aspects: whether this requires authentication beyond the parameters, if it's reversible, rate limits, error conditions, or what specific changes occur in the system. For a security-sensitive operation like PIN change, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has structural issues: the first two lines are redundant duplicates, and parameter documentation uses @param/@return syntax that's somewhat verbose. The core information is front-loaded, but the repetition wastes space. It could be more efficiently structured without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a mutation tool with no annotations, 4 parameters, and an output schema exists (so return values are documented elsewhere), the description is moderately complete. It covers the basic operation and parameters but lacks important context about security implications, error handling, and system behavior changes that would be crucial for safe PIN modification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining all 4 parameters: api_key ('allocated to your application'), token and wallet_fk ('provided by /access/login'), and new_pin ('must be 5 numeric digits'). It adds meaningful context about parameter origins and constraints that the bare schema doesn't provide, though it could elaborate more on format expectations beyond '5 digits'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Changes the wallet PIN' with the specific constraint 'must be 5 digits'. This is a specific verb+resource combination that distinguishes it from siblings like wallet-related tools that don't modify PINs (e.g., wallet_accounts_list, wallet_verify). However, it doesn't explicitly differentiate from potential PIN-related siblings that might exist in other contexts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions prerequisites (api_key, token, wallet_fk from /access/login) but doesn't indicate scenarios where PIN change is appropriate versus other wallet operations or what might happen if used incorrectly. No sibling tools are referenced for comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account (B)
Returns public information associated with an account_fk: address, commodity, currency and type @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states this returns 'public information' which implies read-only access, but doesn't specify authentication requirements, rate limits, or error conditions. The @return annotation adds some value by indicating JSON output format, but behavioral details are minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief with two sentences: one stating the tool's purpose and parameters, another specifying the return format. No unnecessary information is included, though the @return notation could be integrated more smoothly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return value documentation), 1 parameter with 0% schema coverage, and no annotations, the description provides adequate context. It explains what the tool does, what parameter it takes, and the return format, though more behavioral details would be helpful given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and only 1 parameter, the description adds significant value by explaining that 'account_fk' is used to retrieve public account information. It doesn't specify the format or constraints of account_fk beyond being an integer, but provides clear semantic context for the single parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns public information associated with an account_fk' with specific fields listed (address, commodity, currency, type). It distinguishes from siblings like ai__account_info by specifying 'public information' and listing exact fields, though it doesn't explicitly contrast with other account-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With many sibling tools like ai__account_info, ai__account_address, and ai__accounts, the description doesn't indicate when this specific tool is appropriate versus those other account information tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_address (C)
Validates and verifies the existence of a Netfluid account address (not blockchain address) @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| account_address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions validation and verification but doesn't disclose behavioral traits such as what happens on success/failure, whether it's a read-only operation, authentication requirements, rate limits, or error conditions. The mention of '@return: a json object' is minimal and doesn't add meaningful context beyond what the output schema likely provides.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose, followed by a minimal return statement. It avoids unnecessary verbosity, though the '@return' note could be more integrated. Overall, it's efficiently structured with little waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which likely details the return object), the description doesn't need to explain return values. However, with no annotations, 0% schema coverage for the single parameter, and complexity around validation/verification, the description is incomplete—it lacks details on behavior, error handling, and parameter specifics, making it only minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for the undocumented parameter 'account_address'. The description only states it's for a 'Netfluid account address (not blockchain address)', which adds some semantic meaning (clarifying it's not a blockchain address) but doesn't explain format, constraints, or examples. This partial compensation is insufficient for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Validates and verifies the existence of a Netfluid account address' which provides a clear verb ('validates and verifies') and resource ('Netfluid account address'). However, it doesn't distinguish this from potential sibling tools like 'ai__account' or 'ai__account_info' that might also handle account-related operations, making it somewhat vague about its specific niche.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or compare it to sibling tools like 'ai__account' or 'ai__account_info', leaving the agent with no contextual usage information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_alias (B)
Sets a friendly name/alias on an account. Confirm (yes/no) before executing
Sets a friendly name/alias on an account_fk. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk to charge for the mint fee @param alias: The alias or friendly name, try and use unique names for each account in the wallet
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| alias | Yes | | |
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
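The description's advice to "try and use unique names for each account in the wallet" is phrased as a suggestion, and the listing documents no server-side uniqueness check, so a caller might deduplicate aliases locally before invoking ai__account_alias. A purely illustrative helper:

```python
def choose_unique_alias(desired: str, existing_aliases: set) -> str:
    """Return `desired` unchanged if unused; otherwise append a numeric suffix.

    Illustrative only: the ai__account_alias listing asks for unique
    names per wallet but documents no enforcement, so this check is a
    client-side courtesy, not part of the API.
    """
    if desired not in existing_aliases:
        return desired
    n = 2
    while f"{desired}-{n}" in existing_aliases:
        n += 1
    return f"{desired}-{n}"
```

The existing aliases would presumably come from a wallet listing call such as the accounts_list endpoint the sibling tools reference.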
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a mutation tool ('Sets'), includes a caution ('Confirm before executing'), and hints at uniqueness ('try and use unique names'). However, it lacks details on permissions, side effects, error handling, or rate limits. It adds some context but falls short of fully describing behavioral traits for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat structured with a purpose statement, caution, and parameter explanations, but it is repetitive (e.g., 'Sets a friendly name/alias on an account' appears twice) and includes informal notes ('try and use unique names'). It could be more streamlined and front-loaded for clarity, with some sentences not earning their place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with 4 parameters), no annotations, and an output schema present (which handles return values), the description is moderately complete. It covers purpose, basic usage caution, and parameter semantics, but lacks behavioral details like error cases or security requirements. It is adequate but has clear gaps for a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description adds semantic information for all four parameters, explaining their purposes (e.g., 'api_key: The api key allocated to your application'). However, it does not specify formats, constraints, or examples beyond basic explanations, leaving gaps in practical usage guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Sets a friendly name/alias on an account.' This specifies the verb ('Sets') and resource ('friendly name/alias on an account'), making it easy to understand. However, it does not explicitly differentiate from sibling tools like 'ai__account_rename' or 'ai__bridge_rename', which might have similar renaming functions, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline: 'Confirm (yes/no) before executing,' which implies a cautionary step for this mutation operation. However, it does not specify when to use this tool versus alternatives (e.g., other account modification tools), nor does it detail prerequisites or exclusions. The guidance is implied but incomplete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_buy (C)
Purchases a digital asset from the account's FIAT balance. Confirm (yes/no) before executing
Purchases a digital asset from the account's FIAT balance @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to buy @param digital_asset_fk: The digital_asset_fk to buy
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| digital_asset_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
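The "Confirm (yes/no) before executing" instruction puts the safety burden on the caller. One way an agent harness might honour it is a gate that only assembles the request body after an explicit affirmative answer; the function and its field names are a sketch built from the listed parameters, not Netfluid's actual client code:

```python
def confirm_and_build_buy(api_key: str, token: str, account_fk: str,
                          digital_asset_fk: str, confirmation: str):
    """Gate an ai__account_buy call on an explicit yes/no answer.

    The listing says 'Confirm (yes/no) before executing'; this helper
    returns the request body only when the user answered yes.
    """
    if confirmation.strip().lower() not in ("yes", "y"):
        return None  # do not execute a purchase without explicit confirmation
    return {"api_key": api_key, "token": token,
            "account_fk": account_fk, "digital_asset_fk": digital_asset_fk}
```

Returning `None` rather than raising keeps the "user declined" path distinct from error paths, so the agent can re-prompt instead of aborting.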
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a confirmation step, which adds some context, but fails to disclose critical traits such as whether this is a destructive/mutative operation (implied by 'purchases'), authentication needs (only hinted via parameters), rate limits, error handling, or what the JSON return object contains. This is inadequate for a financial transaction tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: it repeats the first sentence verbatim, and includes parameter annotations in a non-standard format that could be confusing. While it's front-loaded with the main purpose, the repetition and informal @param/@return tags reduce clarity and efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a financial purchase tool with 4 parameters, 0% schema coverage, no annotations, and an output schema (which helps but isn't described), the description is incomplete. It lacks details on behavioral aspects, error cases, and doesn't fully explain parameters or return values, making it insufficient for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists parameters with brief explanations (e.g., 'The api key allocated to your application'), but these add minimal semantic value beyond naming. Key details like parameter formats, constraints, or how to obtain values (e.g., 'account_fk from which to buy') are missing, leaving significant gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Purchases a digital asset') and the resource ('from the account's FIAT balance'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'ai__account_sell' or 'ai__account_swap', which a top score would require.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline ('Confirm (yes/no) before executing'), which provides implied context for when to use it (i.e., after confirmation). However, it lacks explicit guidance on when to use this tool versus alternatives like 'ai__account_buy_telco' or other purchase-related siblings, and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_buy_telco (B)
Purchases airtime by converting currency to minutes and seconds. Confirm (yes/no) before executing
Purchases airtime by converting currency to minutes and seconds. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to deduct the amount @param amount: The amount in currency, 2 decimals @param currency_code: The currency code, e.g. USD, ZAR, BWP, ZIG
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| currency_code | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
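The description constrains two fields: the amount must carry 2 decimals and the currency code should be one of codes like USD, ZAR, BWP, or ZIG. A caller can normalise both before invoking ai__account_buy_telco; the code set below is taken only from the description's examples and is almost certainly not the full supported list:

```python
from decimal import Decimal, ROUND_HALF_UP

# Codes taken from the description's examples; the full supported set is not listed.
EXAMPLE_CODES = {"USD", "ZAR", "BWP", "ZIG"}

def telco_purchase_fields(amount, currency_code: str) -> dict:
    """Normalise the amount/currency pair for ai__account_buy_telco.

    The description requires the amount 'in currency, 2 decimals',
    so quantise to two decimal places before sending.
    """
    code = currency_code.upper()
    if code not in EXAMPLE_CODES:
        raise ValueError(f"unrecognised currency code: {currency_code}")
    quantised = Decimal(str(amount)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    return {"amount": str(quantised), "currency_code": code}
```

Using `Decimal` instead of `round()` on a float avoids binary floating-point surprises when formatting monetary amounts.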
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden of behavioral disclosure. It mentions a confirmation step and that the tool performs a purchase (implying a write/mutation operation), but it lacks details on permissions, rate limits, error handling, or what the purchase entails (e.g., is it irreversible?). For a financial transaction tool with no annotations, this is insufficient, though it does add some context beyond basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat redundant, repeating 'Purchases airtime by converting currency to minutes and seconds.' twice, which wastes space. It's front-loaded with the purpose, but the parameter explanations are listed in a block without clear structuring. Overall, it's moderately concise but could be improved by removing repetition and better organizing the information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (financial transaction tool), empty annotations, 0% schema coverage, but presence of an output schema (which handles return values), the description is partially complete. It covers purpose, parameters, and a confirmation step, but lacks behavioral details like safety, permissions, or error handling. The output schema mitigates some gaps, but for a tool with no annotations, more context is needed to be fully adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides parameter semantics for all 5 parameters (e.g., 'api_key: The api key allocated to your application', 'amount: The amount in currency, 2 decimals'), adding meaning not present in the schema. However, it doesn't explain relationships between parameters (e.g., how 'account_fk' relates to 'token') or provide examples beyond currency codes, so it's not a perfect 5.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Purchases airtime by converting currency to minutes and seconds.' This specifies the verb ('purchases'), resource ('airtime'), and mechanism ('converting currency to minutes and seconds'). However, it doesn't explicitly differentiate from sibling tools like 'ai__account_buy' or 'ai__account_charge', which might have overlapping financial transaction purposes, so it's not a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline: 'Confirm (yes/no) before executing,' which implies a confirmation step is needed. However, it doesn't provide explicit guidance on when to use this tool versus alternatives (e.g., compared to 'ai__account_buy' or other telco-related tools), nor does it specify prerequisites or exclusions. This leaves usage context implied rather than clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_charge (A)
Charges the account based on a presented QR-code or NFC card. Confirm (yes/no) before executing
Charges the account based on a presented QR-code or NFC card. The payer must present a valid PIN for the transaction to complete @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The merchant's account_fk, typically the account that will receive the payment @param account_address: The account_address that will be charged @param amount: The amount to charge @param pin: The account PIN of the account_address that will be charged @param note: The note on the transaction
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| pin | Yes | | |
| note | No | | |
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| account_address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
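With seven parameters and only `note` optional, it is easy to mix up the two account identifiers: `account_fk` is the merchant side that receives the payment, while `account_address` is the payer being charged. A hypothetical payload builder that encodes that distinction (field names come from the listing; the function itself is illustrative):

```python
def build_charge_payload(api_key: str, token: str, account_fk: str,
                         account_address: str, amount, pin: str,
                         note: str = None) -> dict:
    """Assemble an ai__account_charge body; `note` is the only optional field."""
    payload = {
        "api_key": api_key,
        "token": token,
        "account_fk": account_fk,            # the merchant's (receiving) account
        "account_address": account_address,  # the payer's address to be charged
        "amount": amount,
        "pin": pin,                          # the payer's PIN, required to complete
    }
    if note is not None:
        payload["note"] = note
    return payload
```

Omitting `note` entirely when it is not supplied, rather than sending an empty string, keeps the request aligned with the schema's required/optional split.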
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the need for user confirmation ('Confirm (yes/no) before executing'), authentication requirements ('The payer must present a valid PIN'), and that it's a transactional operation ('Charges the account'). However, it doesn't mention potential side effects like rate limits, idempotency, or error handling, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy: the first two sentences are nearly identical. The parameter list is clear but could be more integrated. It's front-loaded with the core purpose, but the structure feels slightly disjointed with repetitive lines and a separate param block.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial transaction with 7 params, no annotations), the description does a decent job: it explains the purpose, key behavior (confirmation, PIN), and parameters. With an output schema present, it doesn't need to detail return values. However, it misses some context like error cases or idempotency, which would be helpful for a charge operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all 7 parameters with brief explanations (e.g., 'The amount to charge'), adding meaning beyond the bare schema. However, explanations are minimal and don't cover details like formats, constraints, or relationships between parameters (e.g., how account_fk relates to account_address). This provides basic semantics but lacks depth.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Charges the account based on a presented QR-code or NFC card.' It specifies the verb ('charges') and resource ('account'), and distinguishes it from siblings like ai__account_pay or ai__account_send by focusing on QR/NFC-based charging. However, it doesn't explicitly differentiate from all payment-related siblings, keeping it at 4 rather than 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Confirm (yes/no) before executing' and mentions the payer must present a valid PIN. It implies this is for QR/NFC transactions but doesn't explicitly state when to use this versus alternatives like ai__account_pay or ai__account_send, nor does it mention prerequisites or exclusions. This gives implied guidance but lacks explicit alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_info (C)
Returns detailed account information.
Returns detailed account information and balances. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns information, implying it's a read-only operation, but doesn't disclose behavioral traits like authentication needs (though parameters hint at it), rate limits, or what 'detailed' entails. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat repetitive ('Returns detailed account information.' is duplicated) and includes param annotations that could be structured better. It's front-loaded with the purpose, but the repetition and inline param details reduce efficiency, though it's not overly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (implied by 'Has output schema: true'), the description doesn't need to explain return values. However, with no annotations, 3 parameters at 0% schema coverage, and complexity in sibling tools, the description adds some param semantics but lacks behavioral context, making it minimally adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides param annotations explaining each parameter's purpose and source (e.g., 'api_key: The api key allocated to your application'), adding meaningful context beyond the bare schema. However, it doesn't cover all potential semantics like format constraints, so it's not a perfect 5.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'returns detailed account information and balances,' which provides a clear verb ('returns') and resource ('account information'). However, it doesn't differentiate from sibling tools like 'ai__account' or 'ai__accounts,' which likely serve similar purposes, making the purpose somewhat vague in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives. With many sibling tools related to accounts (e.g., 'ai__account', 'ai__accounts', 'ai__account_statement'), there's no indication of context or exclusions, leaving the agent without usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
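The documented parameters can be assembled into a request body as in this minimal sketch. The helper name and the use of a JSON body are assumptions; only `api_key`, `token`, and `account_fk` come from the description above (the token from a prior `/access/login`, the `account_fk` from `/wallet/accounts_list`).

```python
import json

def build_account_detail_payload(api_key, token, account_fk):
    # api_key: issued to your application
    # token: wallet_api_token returned by /access/login
    # account_fk: account identifier from /wallet/accounts_list
    payload = {
        "api_key": api_key,
        "token": token,
        "account_fk": account_fk,
    }
    return json.dumps(payload)
```

How the payload is transported (headers, endpoint path) is not documented here and would need to be taken from the server's own reference material.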
ai__account_merchant_voucher_issue (C)
Issues a Netfluid voucher. Performs a FIAT withdraw. Only applicable on FIAT balances
@param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk into which to redeem the voucher @param amount: The amount to issue for
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| amount | Yes | ||
| api_key | Yes | ||
| account_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'Performs a FIAT withdraw' which implies a financial transaction, but doesn't disclose critical behavioral traits like whether this is irreversible, what permissions are needed, potential fees, rate limits, or what happens if the FIAT balance is insufficient. The description is too sparse for a financial transaction tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with the main purpose stated first. However, the parameter documentation uses @param/@return syntax which is redundant with the schema, and the overall structure could be more front-loaded with critical behavioral information. The sentences earn their place but could be more efficiently organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial transaction tool with 4 parameters, 0% schema description coverage, no annotations, and an output schema (though not shown), the description is incomplete. It covers basic purpose and parameters but lacks critical context about transaction behavior, error conditions, security requirements, and relationship to sibling tools. The presence of an output schema helps, but doesn't compensate for the missing behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all 4 parameters with brief explanations, but these add minimal semantic value beyond the parameter names themselves. For example, 'api_key: The api key allocated to your application' doesn't explain format or where to obtain it. The parameter explanations are too basic to adequately document a financial transaction tool with 0% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Issues a Netfluid voucher') and resource ('Performs a FIAT withdraw'), with additional context about applicability ('Only applicable on FIAT balances'). It distinguishes from sibling tools like 'ai__account_merchant_voucher_redeem' by specifying the 'issue' action, but doesn't explicitly contrast with other financial tools like 'ai__withdraw' or 'ai__account_send'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It mentions 'Only applicable on FIAT balances' which gives some context, but doesn't specify when to use this tool versus alternatives like 'ai__withdraw' or 'ai__account_send', nor does it mention prerequisites or constraints beyond the FIAT balance requirement. No explicit when/when-not guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_merchant_voucher_quote (B)
Performs a quote to before issuing a Netfluid voucher as a merchant.
STEP 1: Performs a quote to issue a voucher as a merchant. Returns the merchant commission as well as the amount charged to the customer. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to issue the voucher @param amount: The amount to issue the voucher for in up to 2 decimals.
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| amount | Yes | ||
| api_key | Yes | ||
| account_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It mentions the tool performs a quote and returns specific data (merchant commission and amount charged), which provides basic behavioral context. However, it lacks details on potential side effects (e.g., if this quote is cached or affects limits), authentication requirements beyond parameters, rate limits, or error conditions. The description does not contradict annotations, but it is insufficient for a mutation-like operation with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues. It repeats the purpose in two slightly different sentences ('Performs a quote to before issuing...' and 'STEP 1: Performs a quote...'), which is redundant. The parameter explanations are clear but could be more efficiently integrated. Overall, it conveys necessary information but with some verbosity and awkward phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a financial quote operation with 4 parameters), no annotations, and an output schema present, the description is reasonably complete. It explains the purpose, parameters, and return value ('a json object' with commission and charged amount details). The output schema likely covers return structure, so the description doesn't need to elaborate further. However, it could improve by addressing behavioral aspects like idempotency or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantic meaning for all four parameters: api_key ('allocated to your application'), token ('wallet_api_token provided by /access/login'), account_fk ('from which to issue the voucher'), and amount ('to issue the voucher for in up to 2 decimals'). This adds crucial context beyond the bare schema types, though it could be more detailed (e.g., format examples or constraints).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Performs a quote to before issuing a Netfluid voucher as a merchant' and 'Returns the merchant commission as well as the amount charged to the customer.' This specifies the verb (quote), resource (Netfluid voucher), and outcome (commission and charged amount). It distinguishes from sibling 'ai__account_merchant_voucher_issue' by indicating this is a preliminary quote step rather than the actual issuance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through phrases like 'before issuing' and 'as a merchant,' suggesting this is a preparatory step for voucher issuance. However, it does not explicitly state when to use this tool versus alternatives like 'ai__account_merchant_voucher_issue' or other quote-related tools, nor does it mention prerequisites or exclusions beyond the parameters listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
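The description requires the quote amount to carry at most 2 decimals. A caller might normalise the value before building the quote parameters, as in this sketch (the helper name and the choice of half-up rounding are assumptions; only the four parameter names come from the description):

```python
from decimal import Decimal, ROUND_HALF_UP

def voucher_quote_params(api_key, token, account_fk, amount):
    # Normalise to 2 decimal places as the description requires;
    # ROUND_HALF_UP is an assumed rounding policy, not documented.
    amt = Decimal(str(amount)).quantize(Decimal("0.01"),
                                        rounding=ROUND_HALF_UP)
    return {
        "api_key": api_key,        # allocated to your application
        "token": token,            # from /access/login
        "account_fk": account_fk,  # account issuing the voucher
        "amount": str(amt),
    }
```

Per the "STEP 1" wording, the returned commission and customer charge from this quote would then inform a follow-up call to the issue tool.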
ai__account_merchant_voucher_redeem (B)
Redeems a Netfluid voucher.
@param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk into which to redeem the voucher @param voucher_code: The netfluid voucher code, a string consisting of 4 sets of numbers example 1234-0000-4321-5678
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| account_fk | Yes | ||
| voucher_code | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'redeems' implies a write/mutation operation, the description doesn't specify whether this is idempotent, what happens on duplicate redemption attempts, what permissions are required, or what side effects occur. The description mentions authentication parameters but doesn't explain their purpose or how to obtain them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise with a clear purpose statement followed by parameter documentation. However, the structure could be improved by separating the purpose from parameter details more clearly. The @param/@return formatting is helpful but somewhat technical. The description earns its place but isn't optimally structured for quick scanning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a mutation tool with no annotations but with an output schema, the description provides adequate basic information. The parameter documentation is strong, but behavioral aspects are under-specified. The presence of an output schema means the description doesn't need to explain return values, but it should provide more context about the operation's effects and constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides crucial parameter documentation that the schema lacks. Each parameter gets a clear explanation: api_key is 'allocated to your application', token comes from '/access/login', account_fk specifies 'into which to redeem', and voucher_code includes a helpful format example. This compensates well for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'redeems' and the resource 'Netfluid voucher', making the purpose immediately understandable. However, it doesn't distinguish this tool from its sibling 'ai__account_merchant_voucher_issue' or 'ai__account_merchant_voucher_quote', which appear to be related voucher operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'ai__account_merchant_voucher_issue' and 'ai__account_merchant_voucher_quote', there's no indication of when redemption is appropriate versus issuing or quoting vouchers. The description also doesn't mention prerequisites beyond the listed parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
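The voucher code is documented as 4 groups of digits, e.g. 1234-0000-4321-5678. A client-side format check could look like this sketch; assuming each group is exactly 4 digits (the example shows 4, but the description does not state it explicitly):

```python
import re

# 4 hyphen-separated groups of 4 digits, matching the documented example.
VOUCHER_RE = re.compile(r"^\d{4}-\d{4}-\d{4}-\d{4}$")

def is_valid_voucher_code(code):
    """Cheap format check before calling the redeem tool."""
    return bool(VOUCHER_RE.match(code))
```

Validating the shape locally avoids burning a redemption attempt on an obviously malformed code.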
ai__account_mint (B)
Mints a new account of an account type into the wallet. Confirm (yes/no) before executing
Mints a new account of an account type into the wallet. There are costs associated with this operation. Minting is generally done asynchronously, it may take several seconds for the minted account to be available @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk into which the new account will be minted. @param account_type_fk: The account_type_fk to mint as per /account/types, default is Solana (7) @param currency_fk: The currency_fk to mint as per /currency/list, default is ZAR (7)
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes | ||
| currency_fk | No | ||
| account_type_fk | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond basic functionality: it warns of costs, describes the asynchronous nature ('may take several seconds'), and notes the need for confirmation. This provides useful behavioral insights, though it could detail more about error handling or permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat repetitive (first two sentences are similar) and could be more streamlined. It front-loads key information but includes redundant phrasing. Overall, it is moderately concise but has room for improvement in structure to eliminate waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a minting operation with 5 parameters, no annotations, and an output schema present, the description provides basic context like costs and asynchronicity. However, it lacks details on prerequisites, error cases, or what the 'json object' return entails, making it adequate but with clear gaps for informed tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists parameters with brief explanations (e.g., 'api_key: The api key allocated to your application'), but these are minimal and do not fully clarify semantics like format or constraints. For 5 parameters, this adds some value but is insufficient to bridge the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'mints' and the resource 'a new account of an account type into the wallet', making the purpose specific. However, it does not explicitly distinguish this tool from sibling tools like 'ai__account' or 'ai__account_types', which could provide context on account creation or types, though the action 'mint' is distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a cautionary note to 'Confirm (yes/no) before executing' and mentions costs and asynchronous behavior, which implies usage context. However, it does not explicitly state when to use this tool versus alternatives like 'ai__account' or provide clear exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
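Since `account_type_fk` and `currency_fk` are optional with documented server-side defaults (Solana, 7; ZAR, 7), a caller can simply omit them, as in this sketch (helper name is illustrative; parameter names are from the description):

```python
def mint_account_params(api_key, token, wallet_fk,
                        account_type_fk=None, currency_fk=None):
    # Omitting account_type_fk / currency_fk accepts the documented
    # defaults: Solana (7) and ZAR (7), resolved server-side.
    params = {"api_key": api_key, "token": token, "wallet_fk": wallet_fk}
    if account_type_fk is not None:
        params["account_type_fk"] = account_type_fk
    if currency_fk is not None:
        params["currency_fk"] = currency_fk
    return params
```

Because minting is asynchronous and carries a cost, a caller would confirm with the user first and then poll the accounts list until the new account appears, rather than assuming it is immediately available.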
ai__account_pair (A)
Assigns a PIN to an account for use by a QR-Code or NFC card. Confirm (yes/no) before executing
Assigns a PIN to an account for use by a QR-Code or NFC card @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param address: The account address @param pin: The 5-digit numeric PIN, e.g. 01234 @param expire_date: The optional expiry date on the PIN, format YYYY-MM-DD HH:MM:SS, in GMT timezone
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| pin | Yes | ||
| token | Yes | ||
| address | Yes | ||
| api_key | Yes | ||
| expire_date | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It mentions a confirmation step ('Confirm (yes/no) before executing'), which adds behavioral context beyond basic function. However, it lacks details on permissions, rate limits, or potential side effects (e.g., whether this overwrites an existing PIN). The description adds some value but is not fully transparent about behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat repetitive, starting with a sentence that is then repeated verbatim, which wastes space. The parameter explanations are clear but could be more integrated. It is front-loaded with the main purpose, but the repetition and lack of tight structure reduce efficiency, making it adequate but not optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage and an output schema (indicated by '@return: a json object'), the description does well by explaining all parameters and the return type. However, it lacks details on error cases or the structure of the JSON response. For a tool with no annotations and complex inputs, it is mostly complete but could benefit from more behavioral or output context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantic details for all parameters: 'api_key' (allocated to application), 'token' (from /access/login), 'address' (account address), 'pin' (5-digit numeric), and 'expire_date' (optional, with format and timezone). This adds meaningful context beyond the bare schema, though it could specify constraints like PIN length more explicitly (e.g., exactly 5 digits).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Assigns a PIN to an account for use by a QR-Code or NFC card.' It specifies the verb ('assigns'), resource ('account'), and context ('for use by QR-Code or NFC card'), making the purpose unambiguous. However, it does not explicitly differentiate from sibling tools like 'ai__access_wallet_pin_change', which might be a related PIN operation, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline: 'Confirm (yes/no) before executing,' which implies a confirmation step is needed. However, it does not specify when to use this tool versus alternatives (e.g., 'ai__access_wallet_pin_change' for changing an existing PIN) or provide explicit exclusions. The guidance is implied but not comprehensive, falling short of explicit alternatives or detailed context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
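The PIN must be exactly 5 digits (leading zeros allowed, e.g. "01234") and the optional `expire_date` is `YYYY-MM-DD HH:MM:SS` in GMT. This sketch enforces both; the helper name and the ttl_hours convenience are assumptions, the formats come from the description:

```python
import re
from datetime import datetime, timedelta, timezone

def pair_params(api_key, token, address, pin, ttl_hours=None):
    # The PIN is a string of exactly 5 digits; keep it as a string so
    # leading zeros (e.g. "01234") survive.
    if not re.fullmatch(r"\d{5}", pin):
        raise ValueError("pin must be exactly 5 digits")
    params = {"api_key": api_key, "token": token,
              "address": address, "pin": pin}
    if ttl_hours is not None:
        # expire_date is "YYYY-MM-DD HH:MM:SS" in the GMT timezone.
        expires = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
        params["expire_date"] = expires.strftime("%Y-%m-%d %H:%M:%S")
    return params
```

Passing the PIN as a string rather than an integer matters: an integer 1234 would silently lose the documented leading-zero form.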
ai__account_pause (A)
Temporarily restricts an account from performing any function that would result in funds exiting the account. Can only be undone by an administrator, warn before using! Confirm (yes/no) before executing
Temporarily restricts an account from performing any function that would result in funds exiting the account. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list @param lock: If set to 1 the account will be system locked; the customer cannot remove the lock
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| lock | No | ||
| token | Yes | ||
| api_key | Yes | ||
| account_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses critical behavioral traits: the restriction is temporary, requires admin reversal, involves a confirmation step, and prevents fund exits. It also explains the 'lock' parameter's effect (system lock preventing customer removal). However, it doesn't mention authentication needs beyond parameters, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: it repeats the first sentence verbatim, creating redundancy. The content is front-loaded with purpose and warnings, but the parameter explanations are somewhat terse. It could be more streamlined without losing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a mutation tool with no annotations, 4 parameters (3 required), 0% schema coverage, but with an output schema (returning JSON), the description does well. It covers purpose, warnings, parameters, and behavioral context. The output schema handles return values, so the description doesn't need to explain them. It's mostly complete for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantic meaning for all 4 parameters: 'api_key' as application key, 'token' as wallet API token from login, 'account_fk' as account identifier from accounts list, and 'lock' as system lock setting. This adds significant value beyond the bare schema, though it doesn't detail formats or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Temporarily restricts an account from performing any function that would result in funds exiting the account.' It specifies the action (restricts) and resource (account), but doesn't explicitly differentiate from siblings like 'ai__account_resume' or 'ai__account_unlock' beyond mentioning it's temporary and requires admin reversal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Can only be undone by an administrator, warn before using! Confirm (yes/no) before executing.' This gives explicit warnings and prerequisites, though it doesn't directly compare to alternatives like 'ai__account_resume' for reversal or specify when to use this over other account management tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
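Because a pause is administrator-reversible only, and `lock=1` additionally prevents the customer from removing the lock, a caller should gate the call behind explicit confirmation. A sketch of the parameter assembly (helper name and the boolean wrapper are assumptions; parameter names are documented):

```python
def pause_account_params(api_key, token, account_fk, system_lock=False):
    # lock=1 applies a system lock the customer cannot remove; pausing
    # itself can only be undone by an administrator, so confirm first.
    params = {"api_key": api_key, "token": token, "account_fk": account_fk}
    if system_lock:
        params["lock"] = 1
    return params
```

Leaving `lock` out entirely (rather than sending `lock=0`) keeps the request minimal, since the parameter is documented as optional.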
ai__account_pay (C)
Charges the account a transaction fee. Confirm (yes/no) before executing
Charges the account a transaction fee. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk to charge @param amount: The amount to charge @param note: The note on the transaction
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | ||
| token | Yes | ||
| amount | Yes | ||
| api_key | Yes | ||
| account_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the need for confirmation, which adds some context, but fails to describe critical traits such as whether this is a destructive operation, authentication requirements beyond parameters, rate limits, or error handling. For a financial transaction tool, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and a usage note, but it repeats the first sentence unnecessarily and includes parameter annotations in a verbose format. While not overly long, it could be more streamlined by eliminating redundancy and integrating parameter info more efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial transaction with 5 parameters), no annotations, and an output schema (which reduces need to describe returns), the description is partially complete. It covers the basic action and parameters but lacks details on behavior, error cases, and sibling differentiation, making it minimally adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate by explaining parameters. It lists each parameter with brief notes (e.g., 'The api key allocated to your application'), but these are minimal and don't add substantial meaning beyond the schema's type definitions. For 5 parameters with no schema descriptions, this is inadequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Charges') and resource ('the account a transaction fee'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'ai__account_charge' which might have similar functionality, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline ('Confirm (yes/no) before executing'), which provides implied context for when to use this tool. However, it lacks explicit guidance on when to choose this tool over alternatives like 'ai__account_charge' or prerequisites, leaving room for improvement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_resumeAInspect
Resumes full trading capability for an account that has been previously paused. Can only be undone by an administrator. Confirm (yes/no) before executing
Resumes full trading capability for an account that has been previously paused. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
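As a concrete illustration of the parameter sourcing the description spells out (token from /access/login, account_fk from /wallet/accounts_list), here is a minimal payload-builder sketch in Python. The function name and the client-side confirmation gate are assumptions; only the field names come from the tool's @param notes:

```python
# Hypothetical sketch of an ai__account_resume call payload.
# Field names follow the @param notes; the confirmation gate is a
# client-side convention, not part of the documented API.
def build_resume_payload(api_key: str, token: str, account_fk: str,
                         confirmed: bool) -> dict:
    if not confirmed:
        # The description says resuming can only be undone by an
        # administrator, so refuse to build the payload unconfirmed.
        raise ValueError("confirm (yes/no) before executing")
    return {"api_key": api_key, "token": token, "account_fk": account_fk}

payload = build_resume_payload("APP_KEY", "WALLET_TOKEN", "ACC_1",
                               confirmed=True)
```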
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the action is irreversible (administrator-only undo) and requires confirmation, which are critical behavioral traits. However, it lacks details on permissions needed, rate limits, or error handling, leaving gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with key information but has redundancy (the first sentence is repeated verbatim). The parameter explanations are helpful, but the repetition wastes space. Overall, it's adequately sized but could be more efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a mutation tool with no annotations, 0% schema coverage, and an output schema (implied by '@return'), the description does well: it explains purpose, irreversible nature, confirmation need, and parameter semantics. It covers essential context, though more behavioral details (e.g., auth requirements) would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics for all three parameters: 'api_key' as allocated to the application, 'token' as from /access/login, and 'account_fk' as from /wallet/accounts_list. This clarifies sources and purposes beyond basic schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Resumes') and resource ('full trading capability for an account'), specifying it applies to previously paused accounts. It distinguishes from siblings like 'ai__account_pause' by indicating the opposite action, though it doesn't explicitly mention all alternatives like 'ai__access_resume'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: use for resuming a paused account, with a warning about irreversibility ('Can only be undone by an administrator') and a confirmation prompt. It implies usage by distinguishing from pause operations but doesn't explicitly name when-not-to-use scenarios or all sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__accountsCInspect
Provides tools to create and manage wallet accounts. This tool provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the burden. It mentions 'Provides tools to create and manage' and 'reference information,' implying it might be a meta-tool or informational, but doesn't disclose behavioral traits like permissions needed, side effects, or rate limits. The @return note adds some context about output format, but overall, it lacks details on how it behaves operationally beyond basic intent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief at three sentences, but it is not well-structured or front-loaded. The first sentence is direct but vague, the second is confusing ('This tool provides reference information in the "referenced_tools" schema'), and the third is a technical note. It could be more concise and better organized; some sentences do not earn their place due to awkward phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (managing wallet accounts), empty annotations, 0% schema coverage, but an output schema exists, the description is moderately complete. It hints at functionality and output format, but lacks details on behavior, error handling, or how it integrates with sibling tools. The output schema reduces the need to explain return values, but more context is needed for a tool with such a broad scope among many siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but there's only one parameter (api_key). With a single required parameter, expectations are modest, yet the description never explains what the api_key is for or how it's used, so it doesn't fully compensate for the missing schema descriptions, keeping it from a perfect score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Provides tools to create and manage wallet accounts,' which gives a general purpose but is vague about what specific actions it performs. It doesn't distinguish itself from sibling tools like 'ai__account' or 'ai__wallet_accounts_list,' making it unclear if this is a meta-tool, a management interface, or something else. The phrase 'Provides tools' is ambiguous rather than specifying a clear verb+resource combination.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. With many sibling tools related to accounts and wallets (e.g., 'ai__account', 'ai__wallet_accounts_list'), the description fails to indicate if this is for high-level management, creation only, or reference purposes. The mention of 'reference information' hints at usage but doesn't provide explicit when/when-not instructions or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_sellCInspect
Sells a digital asset and returns the proceeds to the account's FIAT balance. Confirm (yes/no) before executing
Sells a digital asset and returns the proceeds to the account's FIAT balance @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to sell @param digital_asset_fk: The digital_asset_fk to sell @param amount: The amount of the digital_asset_fk to sell
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| digital_asset_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
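The five required fields above can be assembled client-side. A hedged sketch, with a hypothetical helper name; the positive-amount check is an assumption, since the description never states valid ranges for amount:

```python
# Hypothetical sketch of an ai__account_sell payload. Field names follow
# the @param notes; the positive-amount guard is an assumption, not
# documented behavior.
def build_sell_payload(api_key: str, token: str, account_fk: str,
                       digital_asset_fk: str, amount: float) -> dict:
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {
        "api_key": api_key,
        "token": token,
        "account_fk": account_fk,
        "digital_asset_fk": digital_asset_fk,
        "amount": amount,
    }

sell = build_sell_payload("APP_KEY", "WALLET_TOKEN", "ACC_1", "BTC_FK", 0.5)
```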
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden of behavioral disclosure. It mentions a confirmation step ('Confirm (yes/no) before executing'), which adds some context, but fails to disclose critical behavioral traits such as whether this is a destructive/mutative operation (implied by 'sells' but not stated), authentication needs (hinted by api_key and token parameters but not explained), rate limits, error handling, or what happens on execution. For a financial transaction tool with no annotations, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy: the first sentence is repeated verbatim. It's front-loaded with the core purpose, but the parameter annotations are included inline, which adds length without strong structure. While not overly verbose, it could be more streamlined by removing repetition and better organizing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a financial sell operation with 5 parameters), empty annotations, and an output schema exists (implied by '@return: a json object'), the description is incomplete. It lacks details on behavioral aspects like safety, authentication, and error handling, and while it mentions parameters, it doesn't fully explain them. The output schema might cover return values, but the description doesn't provide enough context for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate by explaining parameters. It lists parameters with brief annotations (e.g., '@param api_key: The api key allocated to your application'), which adds some semantics beyond the schema. However, it doesn't fully clarify the meaning or usage of all parameters (e.g., what 'account_fk' or 'digital_asset_fk' represent in practice, format of 'amount'), leaving gaps. With 5 parameters and low schema coverage, this is inadequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Sells a digital asset and returns the proceeds to the account's FIAT balance.' It specifies the verb ('sells'), resource ('digital asset'), and outcome ('returns proceeds to FIAT balance'), which is clear and specific. However, it doesn't explicitly differentiate from sibling tools like 'ai__account_buy' or 'ai__account_swap', which would be needed for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline: 'Confirm (yes/no) before executing,' which implies a confirmation step is required. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'ai__account_buy' or 'ai__account_swap', nor does it mention prerequisites or exclusions. This leaves usage context partially implied but incomplete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_sendAInspect
Sends FIAT from this account to another Netfluid account. Optionally save as a beneficiary. Confirm (yes/no) before executing
Sends FIAT from this account to an account address. Optionally save as a beneficiary. The FIAT sent is converted to the destination account's currency. This function can send FIAT across wallets @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to send @param destination: The destination's internal account address (not its crypto address, not its account id) @param amount: The amount to send @param note: The note on the transaction @param save: Optionally saves destination as beneficiary for future use. Boolean value of 0=False, 1=True @param name: Sets the beneficiary descriptive name. Only applicable if save is set to 1
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | | |
| note | No | | |
| save | No | | |
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| destination | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses key behavioral traits: it's a send operation (implies mutation), includes currency conversion ('The FIAT sent is converted to the destination account's currency'), and cross-wallet capability ('This function can send FIAT across wallets'). However, it doesn't cover critical aspects like authentication needs (though params hint at it), rate limits, error conditions, or irreversible effects. With no annotations, this is a moderate disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is verbose and repetitive, with redundant sentences like 'Sends FIAT from this account to another Netfluid account. Optionally save as a beneficiary. Confirm (yes/no) before executing' followed by a similar line. The param explanations are thorough but could be more streamlined. It's front-loaded with purpose but wastes space on repetition, reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (8 parameters, no annotations, but has output schema), the description is mostly complete. It explains the tool's purpose, parameters, and key behaviors like currency conversion. The output schema exists, so return values don't need explanation. However, it lacks details on error handling, security implications, or integration with sibling tools, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides detailed parameter semantics for all 8 parameters, explaining each one's purpose (e.g., 'api_key: The api key allocated to your application', 'destination: The destination's internal account address (not its crypto address, not its account id)'). This adds significant meaning beyond the bare schema, fully documenting the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Sends FIAT from this account to another Netfluid account' and 'Sends FIAT from this account to an account address.' It specifies the resource (FIAT) and action (send), though it repeats this information. It doesn't explicitly differentiate from sibling tools like 'ai__account_wire' or 'ai__account_pay', which might have similar functions, so it doesn't reach a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes some usage guidance: 'Optionally save as a beneficiary' and 'Confirm (yes/no) before executing,' which implies procedural steps. However, it doesn't specify when to use this tool over alternatives like 'ai__account_wire' or 'ai__account_pay' from the sibling list, nor does it mention prerequisites or exclusions. This provides implied context but lacks explicit alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_send_smsAInspect
Sends a Mobile SMS and charges this account. Confirm (yes/no) before executing
Sends an SMS and charges this account. This endpoint will deliver to limited destinations: South Africa, Botswana, Zimbabwe, UK. Delivery is not guaranteed. Charged per submission. @param api_key: The api key allocated to your application, must have admin privileges @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list @param mobile: The destination mobile number in e164 international format (no +) @param message: The message to send, charged per 160 characters
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| mobile | Yes | | |
| api_key | Yes | | |
| message | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
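Two constraints above are mechanical enough to encode client-side: the e164-without-plus mobile format and the per-160-character billing. A sketch, assuming hypothetical helper names; only the field names and the billing unit come from the @param notes:

```python
import math

# Hypothetical sketch of an ai__account_send_sms payload. The
# digits-only e164 check (no +) and the per-160-character billing
# estimate follow the @param notes; the helper names are invented.
def build_sms_payload(api_key: str, token: str, account_fk: str,
                      mobile: str, message: str) -> dict:
    if not mobile.isdigit():
        raise ValueError("mobile must be e164 format, digits only (no +)")
    return {
        "api_key": api_key,  # must have admin privileges, per the notes
        "token": token,
        "account_fk": account_fk,
        "mobile": mobile,
        "message": message,
    }

def billed_segments(message: str) -> int:
    """Charged per 160 characters, per the description."""
    return max(1, math.ceil(len(message) / 160))

sms = build_sms_payload("APP_KEY", "WALLET_TOKEN", "ACC_1",
                        "27821234567", "Hello from Netfluid")
```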
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a transactional tool that charges the account, has geographic limitations (South Africa, Botswana, Zimbabwe, UK), non-guaranteed delivery, and per-submission/per-character charging. This covers critical operational aspects, though it could mention error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat repetitive ('Sends a Mobile SMS...' appears twice) and could be more streamlined. However, it's front-loaded with key information and uses a structured @param/@return format that aids readability. Some sentences could be combined for better flow, but overall it's adequately concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's transactional nature (sending SMS with financial charges), no annotations, and 0% schema coverage, the description does a good job covering purpose, parameters, behavioral traits, and output format (@return). The presence of an output schema reduces the need to detail return values. It could be more complete by mentioning error cases or authentication requirements beyond the parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed semantic explanations for all 5 parameters. Each @param annotation clearly explains the purpose, format requirements (e.g., 'e164 international format (no +)', 'charged per 160 characters'), and source context (e.g., 'provided by /access/login'), adding significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'sends a Mobile SMS and charges this account' with specific verb (send) and resource (SMS), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'ai__account_send' or 'ai__text_message', which might have overlapping functionality, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some context with 'Confirm (yes/no) before executing' and lists destination countries and charging details, which implies usage scenarios. However, it lacks explicit guidance on when to use this tool versus alternatives like 'ai__account_send' or 'ai__text_message', and doesn't specify prerequisites or exclusions beyond the listed destinations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_statementBInspect
Returns detailed account FIAT statement.
Returns a detailed account statement from a start date, up to a maximum number of entries. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list @param max_rows: The maximum number of entries to return, if not provided we return 1000 @param start_date: The date from which to return the statement, format YYYY-MM-DD HH:MM:SS. If not provided then it returns 60 days worth of data, date is in the GMT timezone
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| max_rows | Yes | | |
| account_fk | Yes | | |
| start_date | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool as a read operation ('Returns'), which implies it's non-destructive. It provides useful context about default behaviors (1000 entries if max_rows not provided, 60 days of data if start_date not provided, GMT timezone). However, it doesn't mention authentication requirements beyond the parameters, rate limits, error conditions, or what happens with invalid dates/parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise with two purpose statements followed by parameter documentation. However, there's some redundancy (the purpose is stated twice in slightly different ways) and the structure could be improved by separating the high-level description from parameter details more clearly. The @param/@return format is helpful but could be more efficiently organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 required parameters with 0% schema description coverage, the description does an excellent job of documenting parameter semantics. The presence of an output schema means the description doesn't need to explain return values. However, for a financial data retrieval tool, the description could provide more context about what constitutes a 'detailed account statement' - whether it includes transactions, balances, fees, etc.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must fully compensate for the lack of parameter documentation in the schema. It successfully documents all 5 parameters with clear explanations of their purpose, sources, defaults, and formats. The @param annotations provide specific details about where to obtain values (e.g., 'provided by /access/login', 'provided by /wallet/accounts_list') and format requirements ('YYYY-MM-DD HH:MM:SS'). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns detailed account FIAT statement' and 'Returns detailed account statement from a start date for a maximum of entries.' It specifies the resource (account statement) and scope (FIAT, date range, entry limit). However, it doesn't explicitly differentiate from sibling tools like 'ai__account' or 'ai__account_info' which might provide different account-related information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools like 'ai__account', 'ai__account_info', and 'ai__accounts', there's no indication of what makes this tool unique or when it should be preferred over other account-related tools. The description only explains what the tool does, not when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_swapCInspect
Swaps one digital asset for another on an account level. Confirm (yes/no) before executing
Swaps one digital asset for another on an account level, use digital_asset_fk=0 or to_digital_asset_fk=0 to indicate FIAT @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to swap @param digital_asset_fk: The origin digital_asset_fk to swap @param to_digital_asset_fk: The destination digital_asset_fk to swap into @param amount: The amount of the digital_asset_fk to swap @param note: The transaction note
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | | |
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| digital_asset_fk | Yes | | |
| to_digital_asset_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It hints at a confirmation step, implying user interaction or safety checks, but fails to detail critical aspects like whether this is a destructive/write operation (implied by 'swaps'), rate limits, error handling, or what the 'json object' return entails. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: it repeats the opening sentence unnecessarily and mixes usage notes with parameter documentation in a somewhat cluttered format. While it avoids excessive verbosity, the repetition and lack of clear separation between purpose and details reduce its effectiveness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, no annotations, but with an output schema), the description is partially complete. It covers the basic purpose and parameters but misses critical context like authentication flow (linking to 'ai__access_login' for the 'token'), error cases, and sibling tool differentiation. The output schema existence means return values are documented elsewhere, but overall completeness is limited.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all 7 parameters with brief explanations (e.g., 'api_key: The api key allocated to your application'), adding basic semantics beyond the schema. However, it lacks details on parameter formats, constraints (e.g., valid ranges for 'amount'), or the meaning of 'fk' suffixes, resulting in partial but incomplete parameter guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'swaps' and the resource 'digital asset for another on an account level', making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'ai__crypto_swap' or 'ai__account_sell', which appear related to asset exchanges, leaving room for ambiguity in tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance, only mentioning a confirmation step ('Confirm (yes/no) before executing') and a note about using 'digital_asset_fk=0' for FIAT. It lacks explicit when-to-use scenarios, prerequisites (e.g., authentication context from sibling tools like 'ai__access_login'), or alternatives among the many sibling tools, offering insufficient direction for optimal tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_types (grade C)
Lists all account types, typically static information
Lists all account types @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It mentions 'typically static information', which hints at read-only behavior and infrequent changes, but doesn't explicitly state whether the tool is safe, requires authentication, has rate limits, or what happens on errors. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with redundant lines ('Lists all account types' appears twice) and mixes purpose with parameter documentation in an informal format. It's not front-loaded effectively, and the repetition wastes space without adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which covers return values), 1 parameter with 0% schema coverage (partially compensated in description), and no annotations, the description is minimally adequate. It states the purpose and documents the key parameter, but lacks behavioral details and usage context that would be helpful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description documents one parameter (@param api_key: The api key allocated to your application) which adds meaning beyond the schema's type-only definition. However, it doesn't explain format, sourcing, or security implications. With 1 parameter and partial documentation, this meets the baseline for minimal viability.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Lists all account types, typically static information' which provides a clear verb ('Lists') and resource ('account types'), but it's somewhat vague about what 'account types' specifically refers to in this context. The repetition of 'Lists all account types' adds no value and slightly reduces clarity. It distinguishes from many siblings (e.g., ai__account, ai__accounts) but not from similar list tools like ai__currency_types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There's no mention of prerequisites (e.g., authentication context), comparison to other account-related tools (e.g., ai__account, ai__accounts), or typical use cases. The description only states what it does, not when it should be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__account_unlock (grade A)
Unlocks a locked account (from status_fk=10 to status_fk=1). Confirm (yes/no) before executing
Unlocks a locked account. @param api_key: The api key allocated to your application, must have admin privileges @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It implies a state-changing operation ('Unlocks'), which suggests mutation, but doesn't disclose rate limits, side effects, or permissions beyond the admin privileges required for api_key. The confirmation hint adds some behavioral context, but the description lacks detail on what 'unlocking' entails operationally.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat repetitive ('Unlocks a locked account' appears twice) and includes param annotations in the main text, which could be structured better. However, it's front-loaded with the core purpose and confirmation requirement, with param details following logically.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations, 0% schema coverage, but with output schema, the description does well: it explains the purpose, provides usage guidance, and documents all parameters thoroughly. The output schema handles return values, so the description focuses appropriately on inputs and behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides detailed semantics for all 3 parameters: api_key requires 'admin privileges', token comes from '/access/login', and account_fk is 'provided by /wallet/accounts_list'. This adds significant value beyond the bare schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Unlocks') and resource ('a locked account'), with specific status transition details ('from status_fk=10 to status_fk=1'). It distinguishes from siblings like ai__account_pause or ai__account_resume by focusing on unlocking, but doesn't explicitly contrast with all account status tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Confirm (yes/no) before executing' indicates a prerequisite confirmation step. It doesn't explicitly state when not to use or name alternatives among siblings, but the confirmation requirement offers practical guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
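The confirmation step the assessment highlights can be made concrete. Below is a hypothetical sketch (not Netfluid client code) of gating the state-changing ai__account_unlock call behind the yes/no confirmation its description asks for; the payload follows the generic MCP "tools/call" request shape, and the argument names come from the tool's parameter table:

```python
# Hypothetical sketch: refuse to build the mutating ai__account_unlock
# call until the human has explicitly answered "yes". The helper name
# and payload wrapper are assumptions; argument names match the tool.

def build_unlock_request(api_key, token, account_fk, confirmation):
    """Build a tools/call payload only after an explicit 'yes'."""
    if confirmation.strip().lower() != "yes":
        return None  # no consent, no mutating call
    return {
        "method": "tools/call",
        "params": {
            "name": "ai__account_unlock",
            "arguments": {
                "api_key": api_key,        # must have admin privileges
                "token": token,            # wallet_api_token from /access/login
                "account_fk": account_fk,  # from /wallet/accounts_list
            },
        },
    }
```

Returning `None` on anything other than "yes" keeps the consent check in one place rather than scattered across call sites.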
ai__account_wire (grade B)
Sends FIAT or Digital Assets from this account to a blockchain address. FIAT is converted by the system. Confirm (yes/no) before executing
Sends FIAT or Digital Assets from this account to a blockchain address. The FIAT account must be associated to the same blockchain as the destination address @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to wire @param destination: The destination blockchain address @param amount: The amount to wire @param digital_asset_fk: The digital_asset_fk to wire, use 0 for FIAT @param note: The note on the transaction
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | | |
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| destination | Yes | | |
| digital_asset_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: the warning to confirm before executing implies a destructive or irreversible action, and it notes system conversion for FIAT and blockchain association requirements. However, it doesn't cover other critical traits like authentication needs (implied by api_key and token but not explained), rate limits, error handling, or what 'json object' return entails. This leaves gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy: the first sentence is repeated verbatim. It front-loads the core purpose but includes param listings that could be more integrated. The structure is functional but not optimized, with some wasted space from repetition and a loose organization of notes and parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation with 7 params, no annotations, but an output schema exists), the description is partially complete. It covers the basic operation and parameters but misses key contextual details: no sibling tool differentiation, incomplete behavioral traits, and minimal param semantics. The output schema mitigates the need to explain return values, but overall, it's adequate only for minimal understanding with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully for parameter documentation. It lists all 7 parameters with brief explanations (e.g., 'digital_asset_fk: The digital_asset_fk to wire, use 0 for FIAT'), which adds meaning beyond the bare schema. However, the explanations are minimal and lack details on formats, constraints, or examples (e.g., what 'account_fk' represents, valid ranges for 'amount'). Given the low schema coverage, this is insufficient to fully guide usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Sends FIAT or Digital Assets from this account to a blockchain address.' It specifies the verb ('Sends'), resource ('FIAT or Digital Assets'), and destination ('blockchain address'), which is specific and actionable. However, it doesn't explicitly differentiate from sibling tools like 'ai__account_send' or 'ai__withdraw', which may have overlapping functionality, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage guidance: it mentions that 'FIAT is converted by the system' and includes a prerequisite that 'The FIAT account must be associated to the same blockchain as the destination address.' It also has a warning 'Confirm (yes/no) before executing,' which hints at caution. However, it lacks explicit when-to-use vs. alternatives (e.g., compared to 'ai__account_send' or 'ai__withdraw'), and no exclusions are stated, making it only moderately helpful.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
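The `digital_asset_fk=0` convention called out above is easy to get wrong, so here is an illustrative sketch of assembling the argument object for ai__account_wire from its parameter list. The helper name is an assumption, not part of the Netfluid API; per the description, 0 selects FIAT (which the system converts) and any other value wires that digital asset:

```python
# Hypothetical argument builder for ai__account_wire. Only the argument
# names and the 0-means-FIAT rule come from the tool's own documentation.

def wire_arguments(api_key, token, account_fk, destination, amount,
                   digital_asset_fk=0, note=""):
    args = {
        "api_key": api_key,
        "token": token,                        # wallet_api_token from /access/login
        "account_fk": account_fk,              # source account to wire from
        "destination": destination,            # destination blockchain address
        "amount": amount,
        "digital_asset_fk": digital_asset_fk,  # 0 = FIAT, otherwise asset id
    }
    if note:  # note is the only optional parameter in the table
        args["note"] = note
    return args
```

Omitting an empty `note` mirrors the table, where it is the one non-required field.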
ai__ACH (grade A)
Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return type ('a json object containing the schema'), which adds some context about output format. However, it doesn't cover other behavioral aspects like whether it's read-only (implied by 'returns'), performance characteristics, error handling, or authentication needs. The description adds basic value but leaves gaps for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise, consisting of two sentences that cover the purpose and output. It's front-loaded with the main function. However, the second sentence is somewhat redundant with the first and could be more streamlined (e.g., by integrating the return info into the initial statement), and there's minor grammatical awkwardness ('This tools provides').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, output schema exists), the description is fairly complete. It explains what the tool does and hints at the output structure. With an output schema present, it doesn't need to detail return values extensively. However, for a reference tool in a large set of siblings, it could benefit from more explicit differentiation or usage context to fully guide an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it doesn't contradict the schema. Since there are no parameters, the baseline is 4, as the description appropriately doesn't waste space on non-existent inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts.' It specifies the verb ('returns') and resource ('list of tools'), making the function clear. However, it doesn't explicitly differentiate from sibling tools like 'ai__bridges', 'ai__off_ramps', or 'ai__on_ramps', which appear to be more specific implementations rather than reference tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating it 'provides reference information in the "referenced_tools" schema,' suggesting it should be used for looking up related tools. However, it lacks explicit guidance on when to use this tool versus alternatives like the specific bridge/ramp tools listed as siblings, or prerequisites such as needing certain permissions or contexts. The implication is there but not clearly articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__ask_gemini (grade C)
Performs a Gemini AI prompt.
@param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param prompt: The string prompt
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| prompt | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It mentions parameters like api_key and token, implying authentication needs, but doesn't disclose behavioral traits such as rate limits, response formats beyond 'a json object', error handling, or whether it's read-only or mutative. The description is minimal and lacks crucial operational details needed for safe and effective use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose, followed by param explanations. However, it includes redundant information (e.g., repeating '@param' and '@return' tags that might be structured elsewhere) and lacks efficient structuring. Sentences are minimal but could be more polished; it's concise but not optimally structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 parameters, no annotations, 0% schema coverage, but has output schema), the description is partially complete. It explains parameters but misses behavioral context like authentication flow, error cases, or usage scenarios. The output schema exists, so return values don't need explanation, but overall, it's adequate with clear gaps in operational guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining each parameter: api_key is 'allocated to your application', token and wallet_fk are 'provided by /access/login', and prompt is 'the string prompt'. This clarifies the source and purpose of parameters beyond their types, though it doesn't detail formats or constraints. With 0% schema coverage, this is good but not exhaustive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Performs a Gemini AI prompt' which indicates the tool sends a prompt to Gemini AI. This is a clear verb+resource combination, but it's somewhat vague about what 'performs' entails (e.g., is it a simple query, a complex interaction, or something else?). It doesn't distinguish from sibling tools like 'ai__about' or 'ai__help_ping' which might also involve AI interactions, so it lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools available (e.g., 'ai__about', 'ai__help_ping'), there's no indication of whether this is for general AI queries, specific Gemini interactions, or other contexts. It mentions parameters like api_key and token but doesn't explain prerequisites or when this tool is appropriate compared to others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__automated_agent_signup (grade A)
Automated signup for autonomous agents in order for an Agentic AI to perform autonomous agents payments. In order for this wallet to be autonomous, it needs to be created by a human KYCed wallet. Once done this wallet will be fully enabled to perform any transaction a human KYCed wallet is capable of. Including access to virtual bank accounts in Europe and the USA as a funding source. The impact of this is that the agent will have its own wallet, own bank accounts and blockchain wallets without limitations. The wallet is automatically minted with a Solana Blockchain wallet. The human accepts all legal responsibility for this wallet. Process can take up to 30 seconds.
Automated signup for autonomous agents in order for an Agentic AI to perform autonomous agents payments. In order for this wallet to be autonomous, it needs to be created by a human KYCed wallet. Once done this wallet will be fully enabled to perform any transaction a human KYCed wallet is capable of. Including access to virtual bank accounts in Europe and the USA as a funding source. The impact of this is that the agent will have its own wallet, own bank accounts and blockchain wallets without limitations. The wallet is automatically minted with a Solana Blockchain wallet. The human accepts all legal responsibility for this wallet. Process can take up to 30 seconds. @param secret: A secret, minimum 8 characters, must be unique system-wide. This is typically the username, but private. Suggestion: Generate 3 natural language words, concatenated, or prompt the human for 3 words. @param pin: The 5 digit PIN associated with the new wallet. Can be any random 5 numbers. @param email: The agents email address, if not available use a blank string. @param mobile: The human's mobile phone number, in e164 format, e.g. 27821234567 (no +). if not available use a blank string. @param currency_fk: The currency of the first account, use 7 (ZAR) for everyone else use 3 (USD) @param sponsor_wallet_fk: The sponsor's wallet_fk, this wallet_fk must be KYCed if not the new wallet will also not be KYCed @param sponsor_wallet_pin: The sponsor's 5-digit PIN
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| pin | Yes | | |
| email | Yes | | |
| mobile | Yes | | |
| secret | Yes | | |
| currency_fk | No | | |
| sponsor_wallet_fk | Yes | | |
| sponsor_wallet_pin | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden, and it adds valuable behavioral context: it discloses that the process creates a Solana wallet, grants access to virtual bank accounts, involves a legal-responsibility transfer to a human, and takes up to 30 seconds. It also implies this is a write/mutation operation (signup). However, it doesn't mention error conditions, rate limits, or authentication requirements beyond the parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and repetitive: the first 8 lines are duplicated verbatim, wasting space. It's front-loaded with purpose but includes redundant sentences. The parameter documentation is thorough but could be more efficiently organized. Overall, it lacks conciseness due to duplication and could be streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, mutation operation, financial/legal implications) and no annotations, the description does a good job: it explains purpose, prerequisites, outcomes, timing, and detailed parameter semantics. With an output schema present ('@return: a json object'), it doesn't need to explain return values. However, it could better address error handling or security considerations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides detailed semantic explanations for all 7 parameters: format requirements (e.g., 'e164 format'), examples ('27821234567'), defaults ('use 7 (ZAR)'), usage guidance ('if not available use a blank string'), and suggestions ('Generate 3 natural language words'). This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Automated signup for autonomous agents' to create a wallet that enables autonomous payments. It specifies the resource (wallet) and outcome (autonomous transactions with bank/blockchain access). However, it doesn't explicitly distinguish this from sibling tools like 'ai__signup' or 'ai__automated_signup' (which appears to be a duplicate name), missing full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool: for setting up autonomous agent wallets, with prerequisites (human KYCed wallet as sponsor). It mentions the outcome ('fully enabled to perform any transaction') but doesn't explicitly state when NOT to use it or name alternatives among the many sibling tools (e.g., vs. regular 'ai__signup'), leaving usage context somewhat implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
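The tool's suggestions for the `secret` and `pin` parameters translate directly into code. The sketch below is a hypothetical helper, following the description's guidance of three concatenated natural-language words (minimum 8 characters) and a random 5-digit PIN; the word list and function names are illustrative assumptions:

```python
import random

# Hypothetical input helpers for ai__automated_agent_signup, following
# the description's own suggestions. Nothing here is part of the API.

WORDS = ["amber", "falcon", "harbor", "meadow", "copper", "violet"]

def make_secret(rng=random):
    """Concatenate three distinct words; each word is 5+ characters,
    so the result always clears the 8-character minimum."""
    return "".join(rng.sample(WORDS, 3))

def make_pin(rng=random):
    """Five random digits, kept as a string to preserve leading zeros."""
    return "".join(str(rng.randint(0, 9)) for _ in range(5))
```

Keeping the PIN as a string matters: an integer PIN such as 00931 would silently lose its leading zeros.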
ai__automated_signup (grade A)
Automated signup for new customers. AI should use this tool as the preferred method to signup human customers.
Automated signup for new customers. Process can take up to 30 seconds. Once completed direct the human customer to the kyc url for them to complete the identity verification process. Should the human need more tries at identity verification, call wallet_kyc_session_create @param secret: A secret, minimum 8 characters, must be unique system-wide. This is typically the customer username, but private. Suggestion: Generate 3 natural language words, concatenated, or prompt the human for 3 words, something that the human can remember. @param pin: The 5 digit PIN associated with the wallet. Can be any random 5 numbers, but perhaps use something that is meaningful to the human or prompt them for it. @param email: The customers email address. @param mobile: The customers mobile phone number, in e164 format, e.g. 27821234567 (no +) @param currency_fk: The currency of the first account, if the human is South African, use 7 (ZAR) for everyone else use 3 (USD)
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| pin | Yes | | |
| email | Yes | | |
| mobile | Yes | | |
| secret | Yes | | |
| currency_fk | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the empty annotations. It discloses that the 'Process can take up to 30 seconds' (performance expectation) and instructs to 'direct the human customer to the kyc url for them to complete the identity verification process' (post-signup workflow). However, it doesn't mention error handling, authentication requirements, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat repetitive ('Automated signup for new customers' appears twice) and could be more efficiently structured. However, it's appropriately sized for a complex signup tool with multiple parameters and workflow instructions. The parameter explanations are well-organized but the initial paragraph could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, signup workflow), the description provides good context: purpose, usage guidelines, behavioral expectations, and detailed parameter semantics. With an output schema present, it doesn't need to explain return values. The main gap is lack of error handling or edge case information, but overall it's quite complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed semantic information for all 5 parameters. Each parameter includes practical guidance: 'secret' explains uniqueness requirements and suggests generation methods, 'pin' suggests meaningful numbers, 'mobile' specifies e164 format, and 'currency_fk' provides logic for South African vs. other users. This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Automated signup for new customers' with the specific action of signing up human customers. It distinguishes itself from sibling tools like 'ai__signup' by being the 'preferred method' for automated signups, indicating a specialized use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'AI should use this tool as the preferred method to signup human customers.' It also specifies when to use an alternative tool: 'Should the human need more tries at identity verification, call wallet_kyc_session_create.' This gives clear direction on when to use this tool versus others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__beneficiaries (B)
Provides reference information on the term "beneficiary" or "beneficiaries"
Provides reference information on the term "beneficiary" or "beneficiaries" This tool provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates the tool provides reference information and returns a JSON object, which suggests a read-only, non-destructive operation. However, it doesn't detail aspects like response structure, error handling, or any constraints (e.g., rate limits or authentication needs), leaving gaps in behavioral understanding.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive, with the first two sentences being nearly identical ('Provides reference information...'), which adds no value. The third sentence about the 'referenced_tools' schema and return type is useful but could be integrated more efficiently. Overall, it's somewhat concise but suffers from redundancy and could be better structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no annotations) and the presence of an output schema (which handles return values), the description is minimally adequate. It explains the tool's purpose and output format, but lacks details on usage context or behavioral traits, making it incomplete for optimal agent guidance despite the structured support.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so its 100% coverage is vacuous: there is nothing to document. The description doesn't add any parameter-specific information, which is appropriate here. Since there are no parameters, the baseline score is 4, as the description doesn't need to compensate for missing schema details.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'provides reference information on the term "beneficiary" or "beneficiaries"', which clarifies its purpose as an informational lookup tool. However, it doesn't differentiate itself from similar reference tools like 'ai__about' or 'ai__terms', nor does it specify the scope or format of the reference information beyond mentioning it's in the 'referenced_tools' schema, making it somewhat vague.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any specific contexts, prerequisites, or exclusions, such as when to choose this over 'ai__beneficiaries_list' or other informational tools. This lack of usage context leaves the agent without clear direction.
ai__beneficiaries_list (C)
Lists all beneficiaries on this wallet or on this account
Lists all beneficiaries on this wallet or on this account @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login, may not be 0 @param account_fk: The account_fk for this account, may be set to 0, in which case all beneficiaries associated with this wallet are returned
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
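The `account_fk = 0` convention (list every beneficiary on the wallet) and the `wallet_fk` non-zero rule can be captured in a small payload builder. A minimal sketch, assuming these documented rules; `build_beneficiaries_list_args` is a hypothetical helper, not part of the API:

```python
def build_beneficiaries_list_args(api_key: str, token: str,
                                  wallet_fk: int, account_fk: int = 0) -> dict:
    """Hypothetical payload builder for the beneficiary-list call.

    account_fk = 0 requests every beneficiary on the wallet; a non-zero
    value scopes the listing to one account, per the description above.
    """
    if wallet_fk == 0:
        raise ValueError("wallet_fk may not be 0")
    return {"api_key": api_key, "token": token,
            "wallet_fk": wallet_fk, "account_fk": account_fk}
```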
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states the tool lists beneficiaries, implying a read-only operation, but does not disclose behavioral traits such as authentication requirements (beyond parameter descriptions), rate limits, pagination, or error handling. The mention of '@return: a json object' is minimal and does not elaborate on response structure or potential outcomes.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured, with redundant repetition of the first sentence and parameter details embedded in a comment-like format. It is not front-loaded effectively, and the inclusion of '@return: a json object' adds little value given the presence of an output schema. The text could be more streamlined and organized.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a 4-parameter tool with no annotations, the description provides basic purpose and parameter semantics but lacks behavioral context. The output schema exists, so explaining return values is unnecessary, but gaps remain in usage guidelines and transparency. It is minimally adequate but has clear room for improvement.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides detailed semantics for all four parameters (api_key, token, wallet_fk, account_fk), explaining their purposes, sources (e.g., '/access/login'), and special cases (e.g., account_fk=0 returns all wallet beneficiaries). This adds significant value beyond the bare schema, though it could be more structured.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Lists all beneficiaries on this wallet or on this account', which provides a clear verb ('Lists') and resource ('beneficiaries'). However, it does not distinguish this from sibling tools like 'ai__beneficiaries' or 'ai__beneficiary_add', making the purpose somewhat vague in context. The repetition of the same sentence adds no clarity.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes no explicit guidance on when to use this tool versus alternatives. It mentions that 'account_fk' can be set to 0 to list all wallet beneficiaries, but this is parameter-specific and does not provide broader usage context or comparisons to sibling tools like 'ai__beneficiaries' or 'ai__beneficiary_add'.
ai__beneficiary_add (A)
Creates a beneficiary on this wallet or on this account
Creates a beneficiary on this wallet or on this account @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login, may not be 0 @param account_fk: The account_fk for this account, may be set to 0, in which case this beneficiary will be available to this wallet @param name: The descriptive name of this beneficiary, try to keep name short and unique per wallet @param address: The address of the beneficiary, either the Netfluid Account Address or the Blockchain address @param rba_fk: The rba_fk of the recipient bank account, set to 0 if either name or address is provided @param note: The note or recipient reference on the transaction, visible to the receiver.
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| note | Yes | | |
| token | Yes | | |
| rba_fk | Yes | | |
| address | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
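The `rba_fk` rule above ("set to 0 if either name or address is provided") implies two beneficiary shapes: address-based (Netfluid or blockchain address, rba_fk 0) and bank-account-based (non-zero rba_fk). A hedged sketch of that interaction; `build_beneficiary_add_args` is a hypothetical helper:

```python
def build_beneficiary_add_args(api_key: str, token: str, wallet_fk: int,
                               account_fk: int, name: str, note: str,
                               address: str = "", rba_fk: int = 0) -> dict:
    """Hypothetical payload builder illustrating the rba_fk/address rule.

    Per the description: rba_fk is 0 when an address identifies the
    recipient; a bank-account beneficiary supplies rba_fk instead.
    account_fk = 0 makes the beneficiary wallet-wide.
    """
    if wallet_fk == 0:
        raise ValueError("wallet_fk may not be 0")
    if rba_fk == 0 and not address:
        raise ValueError("provide an address, or an rba_fk for a bank account")
    return {"api_key": api_key, "token": token, "wallet_fk": wallet_fk,
            "account_fk": account_fk, "name": name, "address": address,
            "rba_fk": rba_fk, "note": note}
```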
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It indicates this is a creation/mutation operation ('Creates'), implying it modifies data, but doesn't disclose behavioral traits like authentication requirements (beyond parameter hints), rate limits, side effects, or error handling. The description adds minimal context beyond the basic action.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose, but it repeats the first sentence unnecessarily ('Creates a beneficiary on this wallet or on this account' appears twice). The parameter explanations are thorough but could be more streamlined. Overall, it's adequately structured but not optimally concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (8 required parameters, 0% schema coverage, no annotations), the description does a good job explaining parameters and the basic action. However, it lacks behavioral context (e.g., permissions, errors) and doesn't leverage the output schema (which exists but isn't referenced). For a mutation tool with no annotations, it's reasonably complete but could be enhanced.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantic explanations for all 8 parameters, including their purposes, sources (e.g., 'provided by /access/login'), constraints (e.g., 'may not be 0'), and relationships (e.g., 'set to 0 if either name or address is provided'). This adds significant value beyond the bare schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Creates a beneficiary') and the target ('on this wallet or on this account'), which is specific and unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'ai__beneficiaries' or 'ai__beneficiary_remove', which would be needed for a perfect score.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing authentication via other tools first) or compare it to related sibling tools like 'ai__beneficiaries_list' or 'ai__beneficiary_remove', leaving the agent without context for tool selection.
ai__beneficiary_remove (C)
Deletes a beneficiary
Creates a beneficiary on this wallet or on this account @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account on which the beneficiary is set (may be 0) @param beneficiary_id: The beneficiary_id provided by /beneficiary/list (uuid)
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| beneficiary_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
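Since this is a destructive call and `beneficiary_id` is documented as a uuid from /beneficiary/list, a client can validate the id locally before sending the delete request. A minimal sketch under those assumptions; `build_beneficiary_remove_args` is a hypothetical helper:

```python
import uuid

def build_beneficiary_remove_args(api_key: str, token: str,
                                  account_fk: int, beneficiary_id: str) -> dict:
    """Hypothetical payload builder for the (destructive) delete call.

    beneficiary_id comes from /beneficiary/list and must be a uuid;
    checking it locally avoids sending a malformed delete request.
    """
    uuid.UUID(beneficiary_id)  # raises ValueError if not a valid uuid
    return {"api_key": api_key, "token": token,
            "account_fk": account_fk, "beneficiary_id": beneficiary_id}
```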
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must fully disclose behavioral traits. It states the tool deletes a beneficiary (implying a destructive operation) but doesn't mention permissions required, whether deletion is reversible, or any side effects. The contradictory second sentence about creating beneficiaries adds confusion rather than clarity. No rate limits, authentication details beyond parameters, or error handling are described.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and not front-loaded. The first sentence states deletion, but the second contradicts it with creation, wasting space and confusing the reader. The param annotations are useful but could be integrated more smoothly. Overall, it lacks efficiency and clarity in presentation.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage and no annotations, the description partially compensates with param semantics but fails to provide a clear, consistent purpose or behavioral context. The output schema exists (implied by '@return: a json object'), so return values needn't be detailed, but the core functionality is muddled by the contradiction, leaving gaps in understanding the tool's role.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides param annotations for all four parameters (api_key, token, account_fk, beneficiary_id) with brief explanations of their purposes and sources (e.g., 'provided by /access/login' for token). This adds meaningful context beyond the bare schema, though some details like format expectations or example values are missing.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Deletes a beneficiary' which clearly states the action and resource, but then immediately contradicts itself with 'Creates a beneficiary on this wallet or on this account' in the next sentence. This creates confusion about whether the tool deletes or creates beneficiaries, making the purpose unclear and somewhat misleading despite having a specific verb+resource initially.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There are sibling tools like 'ai__beneficiaries', 'ai__beneficiaries_list', and 'ai__beneficiary_add' that likely handle listing and adding beneficiaries, but the description doesn't mention these or specify when deletion is appropriate versus other operations.
ai__bridge_blockchain (A)
Creates a blockchain to blockchain bridge. Confirm (yes/no) before executing
Creates a blockchain to blockchain bridge. Only supports USDC transfers. Do not send any other tokens as these may be lost. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param account_fk: The account_fk that will use the bridge @param blockchain: The blockchain that will receive the transfer. Possible values "ethereum","solana","avalanche_c_chain" @param address: The address on the above blockchain that will receive the transfer @param alias: A descriptive name for this Bridge, can be anything, set by async default if none is provided. @param currency: The currency, possible values are usdc, usdt, eurc
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| alias | No | | |
| token | Yes | | |
| address | Yes | | |
| api_key | Yes | | |
| currency | No | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
| blockchain | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
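The enumerations above (blockchain: "ethereum", "solana", "avalanche_c_chain"; currency: usdc, usdt, eurc; optional alias with a server-assigned default) can be enforced client-side before the confirmation step. A hedged sketch; `build_bridge_blockchain_args` is a hypothetical helper and the sets are copied from the description:

```python
SUPPORTED_CHAINS = {"ethereum", "solana", "avalanche_c_chain"}
SUPPORTED_CURRENCIES = {"usdc", "usdt", "eurc"}

def build_bridge_blockchain_args(api_key: str, token: str, wallet_fk: int,
                                 account_fk: int, blockchain: str,
                                 address: str, alias=None,
                                 currency: str = "usdc") -> dict:
    """Hypothetical payload builder; enumerations from the description above."""
    if blockchain not in SUPPORTED_CHAINS:
        raise ValueError(f"blockchain must be one of {sorted(SUPPORTED_CHAINS)}")
    if currency not in SUPPORTED_CURRENCIES:
        raise ValueError(f"currency must be one of {sorted(SUPPORTED_CURRENCIES)}")
    args = {"api_key": api_key, "token": token, "wallet_fk": wallet_fk,
            "account_fk": account_fk, "blockchain": blockchain,
            "address": address, "currency": currency}
    if alias is not None:  # optional; the server assigns a default otherwise
        args["alias"] = alias
    return args
```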
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the critical behavioral trait that sending non-USDC tokens 'may be lost' (destructive risk) and mentions the confirmation requirement. However, it doesn't cover other important behaviors like whether this is an irreversible operation, what permissions are needed, rate limits, or what happens after bridge creation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has structural issues. The first sentence is repeated verbatim, creating redundancy. The parameter documentation is well-structured with @param tags, but the overall flow could be improved by eliminating the repetition and better integrating the warnings with the main description.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (blockchain bridge creation with 8 parameters), no annotations, and an output schema present (so return values don't need description), the description does a good job. It covers the core purpose, critical warnings, parameter meanings, and confirmation requirement. The main gap is lack of behavioral details about the bridge creation process itself.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides valuable parameter semantics beyond the bare schema. It explains what each parameter represents (e.g., 'api_key: The api key allocated to your application'), lists possible values for 'blockchain' and 'currency', and clarifies that 'alias' is optional with async default. This significantly compensates for the schema's lack of descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'creates a blockchain to blockchain bridge' with the specific purpose of enabling USDC transfers between blockchains. It distinguishes from siblings like 'bridge_list' or 'bridge_delete' by focusing on creation, but doesn't explicitly contrast with other bridge-related tools like 'bridge_on_ramp' or 'bridge_off_ramp_ach_wire'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance to 'Confirm (yes/no) before executing' and warns 'Only supports USDC transfers. Do not send any other tokens as these may be lost.' This gives clear when-to-use context (USDC transfers only) and risk warnings, though it doesn't specify alternatives for non-USDC transfers or compare with other bridge tools.
ai__bridge_delete (C)
Deactivates the Bridge. Confirm (yes/no) before executing
Deactivates the Bridge @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param external_account_id: The bridge's unique reference, found on /bridge/list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| external_account_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It states 'Deactivates the Bridge' which implies a destructive/mutative operation, but doesn't clarify if this is reversible, what permissions are needed, or any rate limits. The confirmation requirement is helpful but insufficient for a mutation tool with zero annotation coverage.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has structural issues - the first line repeats 'Deactivates the Bridge' unnecessarily. The parameter documentation is helpful but could be better integrated. Overall efficient but with some redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations, 4 parameters at 0% schema coverage, but with output schema present, the description covers parameters well but lacks behavioral context. The confirmation guidance helps, but more details about the mutation's effects would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 4 parameters, the description compensates well by documenting all parameters with source information (e.g., 'found on /bridge/list' for external_account_id). This adds significant value beyond the bare schema, though some parameter purposes could be clearer.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Deactivates the Bridge' which provides a clear verb+resource, but it's repetitive and doesn't differentiate from sibling tools like 'ai__bridge_list' or 'ai__bridge_rename'. The purpose is understandable but lacks specificity about what 'Bridge' refers to in this context.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes 'Confirm (yes/no) before executing' which provides some procedural guidance, but it doesn't explain when to use this tool versus alternatives or what prerequisites exist. No explicit when/when-not instructions or sibling tool comparisons are provided.
ai__bridge_list (C)
Returns bridges based on the transfer_type, a bridge is either an "on-ramp" (also called a "virtual account" and a funding source), an "off-ramp" (USDC transfer to a fiat bank account) or a "blockchain" (USDC transfer between blockchains). Bridges are only available on blockchain accounts
Returns bridges based on the transfer_type, a bridge is either an on-ramp, an off-ramp or a blockchain. Bridge on-ramps are also called "virtual accounts" @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param account_fk: The account_fk that will use the bridge @param transfer_type: The type of bridge, possible values are "off-ramp" for USDC and USDt transfers to SEPA/ACH bank accounts, "blockchain" for USDC cross blockchain transfers, "on-ramp" for FIAT deposits into virtual accounts (that instantly arrive as USDC)
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
| transfer_type | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
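The three documented `transfer_type` values ("on-ramp", "off-ramp", "blockchain") can be checked before the call. A minimal sketch under that assumption; `build_bridge_list_args` is a hypothetical helper:

```python
TRANSFER_TYPES = {"on-ramp", "off-ramp", "blockchain"}

def build_bridge_list_args(api_key: str, token: str, wallet_fk: int,
                           account_fk: int, transfer_type: str) -> dict:
    """Hypothetical payload builder for listing bridges by type."""
    if transfer_type not in TRANSFER_TYPES:
        raise ValueError(f"transfer_type must be one of {sorted(TRANSFER_TYPES)}")
    return {"api_key": api_key, "token": token, "wallet_fk": wallet_fk,
            "account_fk": account_fk, "transfer_type": transfer_type}
```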
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what a bridge is and the transfer_type parameter, but doesn't reveal whether this is a read-only operation, what authentication levels are needed, potential rate limits, error conditions, or pagination behavior. The description adds some context about bridge types but misses key behavioral traits for a tool with 5 required parameters.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive and poorly structured. It repeats 'Returns bridges based on the transfer_type' and the bridge definitions, and includes redundant @param/@return annotations that belong in schema documentation. The content could be condensed to 2-3 clear sentences without losing meaning, making it inefficient rather than concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no annotations, but has output schema), the description is partially complete. It explains the core concept of bridges and transfer_type values well, and the output schema existence means return values don't need description. However, it lacks sufficient parameter explanations and behavioral context, making it adequate but with clear gaps for proper tool invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides detailed explanations for transfer_type (defining possible values and their meanings) and mentions account_fk context, but doesn't explain api_key, token, or wallet_fk parameters beyond their names. With 5 parameters and only 1-2 adequately described, the description doesn't sufficiently compensate for the schema's lack of descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns bridges based on the transfer_type' with specific definitions of what a bridge is (on-ramp, off-ramp, blockchain). It distinguishes the tool's function from siblings like ai__bridge_on_ramp or ai__bridge_off_ramp_sepa by focusing on listing bridges by type rather than creating or managing them. However, it doesn't explicitly differentiate from ai__bridges (which might list all bridges without filtering).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'Bridges are only available on blockchain accounts' and listing transfer_type values, suggesting when to use this tool for specific bridge types. However, it lacks explicit guidance on when to choose this over sibling tools like ai__bridges or ai__bridge_on_ramp, and doesn't mention prerequisites or exclusions beyond the account_fk requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__bridge_off_ramp_ach_wire (B)
Creates a blockchain to FIAT off-ramp in the USA supporting both ACH/WIRE bank networks. Confirm (yes/no) before executing
Creates a blockchain to FIAT off-ramp in the USA support both ACH/WIRE bank networks. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param account_fk: The account_fk that will use the bridge @param account_owner: The owner of the recipient bank account @param account_number: The recipient bank account number @param routing_number: The recipient bank account routing number (us) or sort code (uk). @param address_line: The address of the recipient, street and must include a number @param address_city: The address of the recipient, city @param address_state: The address of the recipient, state @param address_zipcode: The address of the recipient, zip or postal code @param address_iso3_country: The address of the recipient, ISO3 country code @param alias: A descriptive name for this Bridge, can be anything, set by async default if none is provided. @param destination_rail: The destination rail to use, possible values are "ach_same_day" and "wire", async default is "ach_same_day" @param currency: The crypto currency to expect, possible values are "usdc","usdt", async defaults to "usdc" @param account_type: The recipient bank account type, possible values are "checking" or "savings", async defaults to "checking". Only applicable when destination_rail = "wire"
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| alias | No | | |
| token | Yes | | |
| api_key | Yes | | |
| currency | No | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
| account_type | No | | |
| address_city | Yes | | |
| address_line | Yes | | |
| account_owner | Yes | | |
| address_state | Yes | | |
| account_number | Yes | | |
| routing_number | Yes | | |
| address_zipcode | Yes | | |
| destination_rail | No | | |
| address_iso3_country | Yes | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| No output parameters | | |
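The parameter notes above can be condensed into a sketch of the argument payload an agent would assemble before calling this tool. This is a minimal sketch: every value is a placeholder, the builder function is hypothetical, and only the parameter names, the documented defaults ("ach_same_day", "usdc"), and the rule that account_type applies only on the wire rail come from the description.

```python
# Hypothetical payload builder for ai__bridge_off_ramp_ach_wire.
# All values are placeholders; only names, defaults, and the
# destination_rail/account_type interaction come from the tool docs.

def build_ach_wire_args(**overrides):
    args = {
        "api_key": "app-key",                # allocated to your application
        "token": "wallet-api-token",         # from /access/login
        "wallet_fk": "wallet-123",           # from /access/login
        "account_fk": "account-456",         # account that will use the bridge
        "account_owner": "Jane Doe",
        "account_number": "000123456789",
        "routing_number": "026009593",       # US routing number or UK sort code
        "address_line": "1 Main Street",     # street, must include a number
        "address_city": "New York",
        "address_state": "NY",
        "address_zipcode": "10001",
        "address_iso3_country": "USA",
        "destination_rail": "ach_same_day",  # documented default; or "wire"
        "currency": "usdc",                  # documented default; or "usdt"
    }
    args.update(overrides)
    # account_type is only applicable when destination_rail == "wire";
    # the documented default there is "checking"
    if args["destination_rail"] == "wire":
        args.setdefault("account_type", "checking")
    else:
        args.pop("account_type", None)
    return args

ach_args = build_ach_wire_args()
wire_args = build_ach_wire_args(destination_rail="wire")
```

Note how the conditional mirrors the description's "Only applicable when destination_rail = 'wire'" clause: on the default ACH rail, account_type is dropped entirely.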
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the confirmation requirement, which is useful, but lacks critical details: it doesn't specify whether this is a destructive/write operation (implied by 'creates'), what permissions or authentication levels are needed beyond the listed parameters, rate limits, or what happens on failure. The description adds minimal behavioral context beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and redundant: the first two sentences are nearly identical, and the parameter documentation is lengthy but necessary due to 0% schema coverage. It's not front-loaded effectively, and the repetition wastes space, reducing overall efficiency despite the essential parameter details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (16 parameters, 0% schema coverage, no annotations) and the presence of an output schema (indicated by @return), the description is mostly complete. It thoroughly documents all parameters and their semantics, which is critical. However, it lacks behavioral details like error handling and side effects; the output schema covers return values, but more context on the tool's operation would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It provides detailed parameter semantics for all 16 parameters, including descriptions, possible values, defaults, and applicability conditions (e.g., 'Only applicable when destination_rail = "wire"'). This adds significant meaning beyond the bare schema, fully documenting the inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Creates a blockchain to FIAT off-ramp in the USA supporting both ACH/WIRE bank networks.' It specifies the action (creates), resource (off-ramp), and geographic scope (USA). However, it doesn't explicitly differentiate from sibling tools like 'bridge_off_ramp_sepa' or 'bridge_on_ramp', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline: 'Confirm (yes/no) before executing,' which provides some context about when to use it (after confirmation). However, it doesn't specify when to use this tool versus alternatives like 'bridge_off_ramp_sepa' for SEPA networks or 'withdraw_to_bank' for withdrawals, leaving gaps in sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__bridge_off_ramp_sepa (B)
Creates a blockchain (USDC) to FIAT off-ramp in Europe for SEPA supporting bank accounts. Confirm (yes/no) before executing
Creates a blockchain to FIAT off-ramp in Europe for SEPA supporting bank accounts. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param account_fk: The account_fk that will use the bridge @param account_owner: The owner of the recipient bank account @param iban: The recipient bank account IBAN @param iso3_country: The recipient bank account iso3 country code, examples BEL for Belgium @param iban_bic: The recipient bank account IBAN BIC. @param entity_type: The type of entity, possible values are "individual" or "business" @param address_line: The address of the recipient, street and must include a number @param address_city: The address of the recipient, city @param address_state: The address of the recipient, state @param address_zipcode: The address of the recipient, zip or postal code @param address_iso3_country: The address of the recipient, ISO3 country code @param alias: A descriptive name for this Bridge, can be anything, set by async default if none is provided. @param first_name: If type of entity is and "individual", provide the recipients first name @param last_name: If type of entity is an "individual", provide the recipients last name @param business_name: If type of entity is a "business", provide the business name, if not provided we will use the account_owner @param reference: The payment reference, displayed on the recipients bank statement.
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| iban | Yes | | |
| alias | No | | |
| token | Yes | | |
| api_key | Yes | | |
| iban_bic | Yes | | |
| last_name | No | | |
| reference | No | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
| first_name | No | | |
| entity_type | Yes | | |
| address_city | Yes | | |
| address_line | Yes | | |
| iso3_country | Yes | | |
| account_owner | Yes | | |
| address_state | Yes | | |
| business_name | No | | |
| address_zipcode | Yes | | |
| address_iso3_country | Yes | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| No output parameters | | |
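The entity_type conditions in the parameter docs (individuals need first_name/last_name; business_name falls back to account_owner) can be sketched as a hypothetical payload builder. All values are placeholders and the helper is illustrative, not part of the Netfluid API.

```python
# Hypothetical payload sketch for ai__bridge_off_ramp_sepa. The entity_type
# rules come from the tool description; every value is a placeholder.

def build_sepa_args(entity_type, **overrides):
    args = {
        "api_key": "app-key",
        "token": "wallet-api-token",   # from /access/login
        "wallet_fk": "wallet-123",
        "account_fk": "account-456",
        "account_owner": "Jane Doe",
        "iban": "BE71096123456769",
        "iso3_country": "BEL",         # ISO3 code, e.g. BEL for Belgium
        "iban_bic": "GKCCBEBB",
        "entity_type": entity_type,    # "individual" or "business"
        "address_line": "1 Rue Exemple",
        "address_city": "Brussels",
        "address_state": "Brussels",
        "address_zipcode": "1000",
        "address_iso3_country": "BEL",
    }
    args.update(overrides)
    if entity_type == "individual":
        # individuals must supply their first and last name
        args.setdefault("first_name", "Jane")
        args.setdefault("last_name", "Doe")
    elif entity_type == "business":
        # per the docs, a missing business_name falls back to account_owner
        args.setdefault("business_name", args["account_owner"])
    return args

individual = build_sepa_args("individual")
business = build_sepa_args("business")
```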
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions a confirmation step ('Confirm (yes/no) before executing'), which hints at a safety measure, but doesn't disclose critical behavioral traits: whether this is a mutating operation (likely yes, given 'creates'), what permissions are needed, potential side effects, rate limits, or what the return JSON contains. For a creation tool with 19 parameters and no annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and a confirmation note, but it's verbose due to the extensive parameter documentation. While the parameter details are necessary given the schema gap, the overall structure could be improved by separating usage guidelines from parameter semantics more clearly. Some repetition exists (the purpose is stated twice). It's appropriately sized for the complexity but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (19 parameters, creation operation), the absence of annotations, and the presence of an output schema, the description is partially complete. It excels in parameter semantics but lacks behavioral transparency, usage guidelines, and sibling differentiation. The output schema handles return values, so that's covered. However, for a tool that likely involves financial transactions and mutations, more context on safety, errors, and operational constraints is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides detailed parameter documentation for all 19 parameters, including purpose, examples (e.g., 'examples BEL for Belgium'), conditional logic (e.g., 'If type of entity is and "individual", provide...'), and defaults (e.g., 'set by async default if none is provided'). This adds substantial meaning beyond the bare schema, fully addressing the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Creates a blockchain (USDC) to FIAT off-ramp in Europe for SEPA supporting bank accounts.' It specifies the action (creates), resource (off-ramp), and geographic/technical scope (Europe, SEPA, USDC to FIAT). However, it doesn't explicitly differentiate from sibling tools like 'ai__bridge_off_ramp_ach_wire' or 'ai__bridge_on_ramp', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage note: 'Confirm (yes/no) before executing,' which provides some procedural guidance. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., other off-ramp tools like 'ai__bridge_off_ramp_ach_wire'), prerequisites beyond the parameters, or typical scenarios. The guidance is minimal and doesn't address key decision points for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__bridge_on_ramp (A)
Creates a virtual account (SEPA/ACH/WIRE) to blockchain bridge. Confirm (yes/no) before executing
Creates a virtual account (SEPA/ACH/WIRE) to blockchain bridge. Limited to one virtual account per payment rail (SEPA/ACH/WIRE). @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param account_fk: The account_fk that will use the bridge @param blockchain: The blockchain that will receive the transfer. Possible values "ethereum","solana","avalanche_c_chain" @param address: The address on the above blockchain that will receive the transfer @param alias: A descriptive name for this Bridge, can be anything, set by async default if none is provided. @param currency: The destination currency, possible values are "usdc", "usdt" @param source_rail: The source rail, possible values are "sepa", "wire" or "ach_push"
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| alias | No | | |
| token | Yes | | |
| address | Yes | | |
| api_key | Yes | | |
| currency | No | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
| blockchain | Yes | | |
| source_rail | No | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| No output parameters | | |
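The enumerated values above lend themselves to a client-side validity check before invoking the tool. This is a sketch under assumptions: the allowed sets for blockchain, currency, and source_rail come from the description, but the fallback defaults chosen here ("usdc", "sepa") are illustrative, since the description does not state defaults for this tool.

```python
# Hypothetical argument check for ai__bridge_on_ramp. Allowed value sets
# are taken from the tool description; the defaults and validator itself
# are illustrative assumptions.

BLOCKCHAINS = {"ethereum", "solana", "avalanche_c_chain"}
CURRENCIES = {"usdc", "usdt"}
SOURCE_RAILS = {"sepa", "wire", "ach_push"}

def build_on_ramp_args(blockchain, address, currency="usdc", source_rail="sepa"):
    if blockchain not in BLOCKCHAINS:
        raise ValueError(f"unsupported blockchain: {blockchain}")
    if currency not in CURRENCIES:
        raise ValueError(f"unsupported currency: {currency}")
    if source_rail not in SOURCE_RAILS:
        raise ValueError(f"unsupported source rail: {source_rail}")
    return {
        "api_key": "app-key",          # placeholder
        "token": "wallet-api-token",   # from /access/login
        "wallet_fk": "wallet-123",
        "account_fk": "account-456",
        "blockchain": blockchain,      # chain receiving the transfer
        "address": address,            # receiving address on that chain
        "currency": currency,
        "source_rail": source_rail,
    }

on_ramp_args = build_on_ramp_args("solana", "So1anaAddre55Placeholder")
```

Validating the enums locally catches typos before the agent spends a confirmed call on a request the service would reject.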
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the creation action (implying mutation), the 'one per rail' limitation, and the confirmation requirement. However, it doesn't mention required permissions, rate limits, idempotency, or what happens if a rail already has a bridge. The description adds some behavioral context but leaves significant gaps for a financial tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues. The first two sentences are redundant ('Creates a virtual account...' appears twice). The parameter documentation is thorough but presented in a dense block. While informative, it could be more efficiently structured with better separation between the tool purpose and parameter details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (9 parameters, financial bridge creation), the absence of annotations, and the presence of an output schema, the description does reasonably well. It explains the tool's purpose, constraints, and all parameters in detail. The main gaps are the lack of explicit differentiation from sibling tools and incomplete behavioral context (security, error handling). The output schema handles return values, so those are appropriately omitted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides detailed parameter documentation: all 9 parameters are listed with clear explanations, including possible values for 'blockchain', 'currency', and 'source_rail'. The descriptions clarify what each parameter represents (e.g., 'wallet_fk: The wallet_fk provided by /access/login') and default behaviors for optional parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'creates a virtual account to blockchain bridge' with specific payment rails (SEPA/ACH/WIRE). It distinguishes from siblings like 'bridge_list' or 'bridge_delete' by focusing on creation. However, it doesn't explicitly differentiate from other bridge-related tools like 'bridge_blockchain' or 'bridge_off_ramp' tools in terms of direction (on-ramp vs off-ramp).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Confirm (yes/no) before executing' and 'Limited to one virtual account per payment rail'. It implies this is for setting up inbound payment channels. However, it doesn't explicitly state when to use this versus alternatives like 'bridge_off_ramp' tools or 'virtual_account' tools, nor does it mention prerequisites beyond the parameters listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__bridge_rename (B)
Changes the Bridge's descriptive name. Confirm (yes/no) before executing
Changes the Bridge's descriptive name. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param external_account_id: The bridge's unique reference, found on /bridge/list @param alias: The new descriptive name for this Bridge
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| alias | Yes | | |
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| external_account_id | Yes | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| No output parameters | | |
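The "Confirm (yes/no) before executing" instruction suggests the agent should gate this mutation behind an explicit confirmation. A minimal sketch, assuming a generic call_tool(name, args) client function — the client, the stub, and all values are hypothetical, not part of the Netfluid API:

```python
# Illustrative confirmation gate around ai__bridge_rename. call_tool is a
# stand-in for whatever MCP client is in use; all values are placeholders.

def rename_bridge(call_tool, external_account_id, alias, confirmed):
    if not confirmed:
        # the tool docs ask for an explicit yes/no before executing
        return {"status": "skipped", "reason": "not confirmed"}
    return call_tool("ai__bridge_rename", {
        "api_key": "app-key",
        "token": "wallet-api-token",                 # from /access/login
        "wallet_fk": "wallet-123",
        "external_account_id": external_account_id,  # from /bridge/list
        "alias": alias,                              # the new descriptive name
    })

def fake_call_tool(name, args):
    # stub client for demonstration; real return shape is undocumented
    return {"status": "ok", "tool": name, "alias": args["alias"]}

skipped = rename_bridge(fake_call_tool, "ext-789", "My payout bridge", confirmed=False)
renamed = rename_bridge(fake_call_tool, "ext-789", "My payout bridge", confirmed=True)
```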
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must fully disclose behavioral traits. It mentions a confirmation step, which adds some context, but fails to describe other critical behaviors: it does not state whether this is a destructive operation (likely yes, as it changes data), authentication requirements beyond parameters, rate limits, error handling, or what the 'json object' return entails. The description is insufficient for a mutation tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and repetitive: it starts with 'Changes the Bridge's descriptive name. Confirm (yes/no) before executing', then repeats 'Changes the Bridge's descriptive name.' unnecessarily. The parameter explanations are listed but could be more integrated. It is front-loaded with the purpose but wastes space on redundancy, reducing overall efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 5 parameters, 0% schema coverage, and no annotations, but with an output schema present, the description is partially complete. It explains parameters well and states the return is a 'json object', but lacks behavioral details (e.g., side effects, error cases) and does not leverage the output schema to clarify return values. It meets minimal needs but has clear gaps in safety and usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantic explanations for all 5 parameters (e.g., 'api_key: The api key allocated to your application', 'external_account_id: The bridge's unique reference, found on /bridge/list'), adding meaningful context beyond the schema's type definitions. This effectively covers the parameters, though some details like format constraints are missing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Changes') and resource ('Bridge's descriptive name'), making the purpose evident. However, it does not explicitly differentiate from potential siblings like 'ai__account_alias' or 'ai__bridge_list', which might handle similar naming operations, though the tool name itself suggests Bridge-specific renaming.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline: 'Confirm (yes/no) before executing,' which implies a confirmation step is needed. However, it does not specify when to use this tool versus alternatives (e.g., other Bridge-related tools like 'ai__bridge_list' for reference or 'ai__account_alias' for other aliases), nor does it provide context on prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__bridges (B)
Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return type ('a json object containing the schema') and hints at reference data, but lacks details on permissions, rate limits, or side effects. Since annotations are empty, the bar is lower, and the description adds some context (e.g., it's a read operation returning structured data), but it's not comprehensive enough for a higher score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise but could be more front-loaded. The first sentence clearly states the purpose, but the second sentence ('This tools provides reference information...') is somewhat redundant and includes a grammatical error ('tools' instead of 'tool'). The third sentence ('@return: a json object...') adds value but could be integrated more smoothly. Overall, it's adequate but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no annotations, but has an output schema), the description is fairly complete. It explains what the tool returns and hints at the output structure. Since an output schema exists, the description doesn't need to detail return values extensively. However, it could better address usage context relative to siblings, keeping it from a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it doesn't contradict the schema. According to the rules, 0 parameters warrants a baseline of 4, as the description appropriately avoids unnecessary parameter explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts.' It specifies the verb ('returns') and resource ('list of tools'), and identifies the domain (bridges, transfers, virtual accounts). However, it doesn't explicitly differentiate from sibling tools like 'ai__bridge_list' or 'ai__off_ramps', which might serve overlapping purposes, so it doesn't reach a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance on when to use this tool. It mentions that it 'provides reference information in the "referenced_tools" schema,' but doesn't specify when to choose this over alternatives like 'ai__bridge_list' or 'ai__off_ramps' from the sibling list. No explicit when/when-not scenarios or prerequisites are stated, leaving usage context vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__commodity (C)
Provides tools to retrieve Netfuid commodities This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'retrieve' (implying a read-only operation) and references the 'referenced_tools' schema, but doesn't disclose behavioral traits like authentication needs, rate limits, or what 'retrieve' entails. The @return note is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with redundancy ('Provides tools' repeated). The second sentence is confusing and doesn't add clarity. It's brief but not effectively structured or front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Has output schema, so return values are covered. However, for a tool with no annotations and minimal description, it lacks context on what 'commodities' means, how retrieval works, or integration with sibling tools. Incomplete for understanding its role in the system.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter count is 0, so there are no parameters to document. Schema description coverage is 100%, but with no parameters, the description doesn't need to add semantic value. Baseline is 4 as per rules for 0 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Provides tools to retrieve Netfuid commodities' which gives a basic verb+resource, but 'Netfuid' appears to be a typo (likely 'Netfluid' based on sibling tools). It doesn't distinguish from siblings like 'ai__commodity_types' or 'ai__crypto_digitalassets'. The second sentence is confusing and doesn't clarify the primary purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. There are many sibling tools related to commodities, crypto, and assets, but the description provides no comparison or context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__commodity_types (C)
Returns all authorised commodities
Returns all authorised commodities @param api_key: The api key allocated to your application
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |

Output Schema

| Name | Required | Description |
|---|---|---|
| No output parameters | | |
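Since this tool takes only api_key, a call sketch is trivial. The call_tool function and the stub response below are hypothetical; the tool's real output shape is not documented beyond "a json object".

```python
# Minimal sketch of invoking ai__commodity_types via a generic MCP client.
# call_tool is a placeholder for the client in use; the stub response is
# fabricated for demonstration only.

def list_commodity_types(call_tool, api_key):
    # api_key is the key allocated to your application
    return call_tool("ai__commodity_types", {"api_key": api_key})

def fake_call_tool(name, args):
    # stub; a real client would perform the MCP tool call here
    return {"tool": name, "args": args}

result = list_commodity_types(fake_call_tool, "app-key")
```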
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states this is a read operation ('returns'), which implies non-destructive behavior, but doesn't disclose any behavioral traits like authentication needs (api_key is mentioned only as a parameter, without context), rate limits, error handling, or what 'authorised' means. The description adds minimal value beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with redundant repetition ('Returns all authorised commodities' appears twice) and includes parameter documentation inline without clear separation. While brief, the repetition and lack of front-loaded clarity reduce effectiveness. Every sentence does not earn its place due to duplication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 parameter, 0% schema coverage, and no annotations, but with an output schema present, the description is minimally adequate. It explains the basic purpose and the parameter, but lacks context on usage, behavioral details, or differentiation from siblings. The output schema relieves the need to explain return values, but overall completeness is limited.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description explicitly documents the single parameter 'api_key' with '@param api_key: The api key allocated to your application'. This adds meaning beyond the schema by explaining the parameter's purpose. However, it doesn't provide format details or usage context, and with only one parameter, the baseline is 4, but the lack of depth keeps it at 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Returns all authorised commodities', which provides a clear verb ('returns') and resource ('authorised commodities'). However, it doesn't differentiate from sibling tools like 'ai__commodity' or 'ai__currency_types', leaving ambiguity about scope. The repetition of the same sentence adds no clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With sibling tools like 'ai__commodity' and 'ai__currency_types', the agent has no indication of whether this tool is for listing commodity types, retrieving specific commodities, or other purposes. The description lacks any context about prerequisites or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__compliance (B)
Provides a link to Netfluid's compliance procedures and documentation This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates the tool is informational ('provides reference information'), which implies it's read-only and non-destructive. However, it doesn't detail any behavioral traits like rate limits, authentication needs, or what happens if compliance docs are unavailable. The @return note adds some context about output format, but overall disclosure is basic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences. The first sentence clearly states the purpose, and the second adds useful context about the output and schema reference. There's no wasted text, though the plural 'tools' in 'This tools provides' is a minor grammatical error that doesn't hinder understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, informational purpose, output schema present), the description is reasonably complete. It explains what the tool does and hints at the output format. However, it could be more thorough by clarifying how it differs from similar documentation tools in the sibling list, which would enhance contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100% (though the schema is empty). The description doesn't need to explain parameters, so it meets baseline expectations. It adds value by noting the output is 'a json object', which is helpful since an output schema exists but isn't shown here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Provides a link to Netfluid's compliance procedures and documentation'. It specifies the verb ('provides') and resource ('link to compliance procedures and documentation'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'ai__privacy' or 'ai__terms' that might also provide documentation links.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers minimal usage guidance. It mentions that the tool 'provides reference information in the "referenced_tools" schema', which hints at its informational nature, but doesn't specify when to use it versus alternatives (e.g., other documentation tools like 'ai__privacy' or 'ai__terms'). No explicit when/when-not instructions or prerequisites are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__contact (C)
How to contact Netfluid's customer support This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. The description mentions it 'provides reference information' which suggests a read-only operation, and the @return note indicates it returns JSON. However, it doesn't specify what kind of contact information is provided (email, phone, hours), whether authentication is needed, or any rate limits. The behavioral information is minimal but not contradictory.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with grammatical errors ('tools' instead of 'tool') and confusing phrasing. The first sentence is a fragment rather than a statement, and the second sentence introduces confusing terminology about the 'referenced_tools' schema. The @return note is tacked on awkwardly. While brief, it's not effectively concise due to poor structure and clarity issues.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description doesn't need to explain parameters or return values. However, for a contact information tool, the description should more clearly state what specific information is provided and in what format. The current description is minimally adequate but leaves important context gaps about what 'customer support' information actually means.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description doesn't need to explain parameters since there are none, and the schema already fully documents the empty input object. No additional parameter information is needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'How to contact Netfluid's customer support', which indicates the tool provides contact information, but it's phrased as a fragment rather than a clear action statement. The second sentence 'This tools provides reference information in the "referenced_tools" schema' is confusing and doesn't clearly state what the tool actually does. It distinguishes from siblings by focusing on support contact, but the purpose isn't clearly articulated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. There's no mention of when a user would need customer support contact information versus using other support-related tools like 'ai__support' or 'ai__help_ping' from the sibling list. The description doesn't provide any context about appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto (B)
Returns a list of tools that work with crypto and stable coins (USDC,USDt) This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a list and provides reference information, which implies it's a read-only operation without side effects. However, it lacks details on rate limits, authentication needs, error handling, or data freshness. The description adds basic context but misses key behavioral traits expected for a tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but could be more front-loaded. The first sentence clearly states the purpose, but the second sentence ('This tools provides reference information in the "referenced_tools" schema') is somewhat redundant and awkwardly phrased. The third sentence ('@return: a json object containing the schema') is unnecessary given the presence of an output schema. Overall, it's concise but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, output schema exists, no annotations), the description is reasonably complete. It explains what the tool returns and hints at the output structure. However, it could better clarify the tool's role in the broader context of sibling crypto tools. The output schema reduces the need for detailed return value explanations, making the description adequate but not exceptional.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, meaning no parameters need documentation. The description doesn't discuss parameters, which is appropriate here. It earns a baseline score of 4 because the schema fully handles parameter semantics, and the description doesn't need to compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a list of tools that work with crypto and stable coins (USDC,USDt)'. It specifies the verb ('returns'), resource ('list of tools'), and scope ('crypto and stable coins'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'ai__crypto_digitalassets' or 'ai__crypto_info', which might also provide crypto-related information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions that the tool 'provides reference information in the "referenced_tools" schema', but this is a technical detail rather than usage context. There's no indication of prerequisites, timing, or comparison to sibling tools, leaving the agent with minimal practical guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto_balance (C)
Returns a crypto wallet address balance associated with an account_fk
Returns a crypto wallet address balance associated with an account_fk @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
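The parameter docs imply a call chain: a `wallet_api_token` from `/access/login`, then an `account_fk` from `/wallet/accounts_list`, and only then the balance call. A minimal sketch of that sequence, where the tool names for the first two steps are inferred from the endpoint paths and all values are placeholders:

```python
# Sketch of the dependency chain implied by ai__crypto_balance's parameters.
# The login and accounts-list steps are assumptions based on the documented
# endpoint paths; tokens and keys shown here are placeholders.
def tool_call(name, arguments, call_id):
    """Build a JSON-RPC "tools/call" request body."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

api_key = "YOUR_API_KEY"
# 1. Log in via /access/login to obtain a wallet_api_token (placeholder).
token = "WALLET_API_TOKEN_FROM_LOGIN"
# 2. Call /wallet/accounts_list to obtain an account_fk (placeholder).
account_fk = "ACCOUNT_FK_FROM_ACCOUNTS_LIST"
# 3. Fetch the balance for that account.
balance_request = tool_call(
    "ai__crypto_balance",
    {"api_key": api_key, "token": token, "account_fk": account_fk},
    call_id=3,
)
```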
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a JSON object, which is helpful, but lacks critical details: it doesn't specify if this is a read-only operation (implied by 'returns' but not explicit), whether it requires authentication beyond the parameters, potential rate limits, error conditions, or what the balance data includes. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy: the first two sentences are identical ('Returns a crypto wallet address balance associated with an account_fk'), which is wasteful. The parameter explanations are useful but could be more tightly integrated. It's front-loaded with the purpose, but the repetition and lack of structural flow (e.g., grouping related info) reduce its efficiency, making it adequate but not exemplary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a balance-fetching tool with authentication parameters), the description is partially complete. It explains the purpose and parameters, and an output schema exists (not detailed here), so it doesn't need to describe return values. However, with no annotations and 0% schema coverage, it misses behavioral aspects like safety, performance, or error handling. For a tool in this context, it provides the basics but lacks depth for confident agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, meaning the input schema provides no descriptions for the three parameters. The description compensates by explaining each parameter: 'api_key' as 'allocated to your application', 'token' as 'provided by /access/login', and 'account_fk' as 'for the account as provided by /wallet/accounts_list'. This adds meaningful context beyond the schema, but since the schema coverage is low, the baseline is lower, and the description does enough to merit a score of 3 by clarifying sources and purposes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a crypto wallet address balance associated with an account_fk'. It specifies the verb ('returns') and resource ('crypto wallet address balance'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'ai__crypto_token_balance' or 'ai__account_info', which might also provide balance-related information, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions parameters like 'account_fk' from '/wallet/accounts_list', but doesn't explain prerequisites, such as needing to call that sibling tool first. There's no mention of when not to use it or what other tools might be better for related tasks, leaving the agent with insufficient context for optimal selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto_blockchains (B)
List all the system supported blockchains, a list of blockchain_fk values
List all the supported blockchains, each entry returns a blockchain_fk for later use @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
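The description only promises "a json object" whose entries carry a `blockchain_fk` for later use, so the response shape below is an assumption, including the example chain names. The point is simply how an agent might harvest the foreign keys for follow-up calls:

```python
# Hypothetical response shape for ai__crypto_blockchains; the "blockchains"
# key, entry fields, and chain names are illustrative assumptions only.
response = {
    "blockchains": [
        {"blockchain_fk": 1, "name": "Solana"},
        {"blockchain_fk": 2, "name": "Ethereum"},
    ]
}

# Collect blockchain_fk values for later calls such as
# ai__crypto_digitalasset_info.
fks = [entry["blockchain_fk"] for entry in response["blockchains"]]
print(fks)  # [1, 2]
```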
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It describes a read-only listing operation ('List all'), implying no destructive actions, but doesn't disclose behavioral traits like rate limits, authentication needs beyond api_key, pagination, or error handling. The mention of 'blockchain_fk for later use' adds some context about output utility, but overall behavioral disclosure is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but repetitive, with two nearly identical sentences. It front-loads the purpose but includes redundant phrasing. The param and return annotations are structured but could be integrated more smoothly. Overall, it's adequate but could be tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, listing operation), no annotations, and an output schema exists (implied by '@return: a json object'), the description is reasonably complete. It covers purpose, parameter semantics, and output format at a high level. However, it lacks details on usage context or behavioral nuances that could aid the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explicitly documents the single parameter 'api_key' with a brief explanation ('The api key allocated to your application'), adding meaning beyond the schema's type definition. Since there's only one parameter and it's fully described, this is sufficient, though not exemplary.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all the system supported blockchains' and 'List all the supported blockchains'. It specifies the verb ('List') and resource ('blockchains'), and indicates the output includes 'blockchain_fk values for later use'. However, it doesn't explicitly differentiate from sibling tools like 'ai__crypto' or 'ai__crypto_digitalassets', which might also list blockchain-related information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites beyond the api_key parameter, nor does it compare to sibling tools such as 'ai__crypto' or 'ai__crypto_digitalassets' that might overlap in functionality. The agent must infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto_dex_swap (C)
Swaps a token for another token using a Distributed Exchange (DEX) on the blockchain. Confirm (yes/no) before executing
Swaps a token (not a digital asset) for another token using a Distributed Exchange (DEX) on the blockchain. The DEX charges fees in performing the swap Unsupported tokens will not be displayed in balances, a blockchain explorer will be required to retrieve these @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk associated with this blockchain wallet @param from_token: The origin token mint or contract address @param to_token: The destination token mint or contract address @param amount: The amount of the token to swap
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| to_token | Yes | | |
| account_fk | Yes | | |
| from_token | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
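A sketch of how an agent might assemble the six documented arguments while honouring the description's "Confirm (yes/no) before executing" instruction. The confirmation gate here is a hypothetical client-side check, not part of the tool itself; addresses, tokens, and the amount's type are placeholders:

```python
# Hypothetical argument set for ai__crypto_dex_swap. All values are
# placeholders: real calls need a mint/contract address for each token,
# and whether "amount" is a string or number is an assumption here.
swap_args = {
    "api_key": "YOUR_API_KEY",
    "token": "WALLET_API_TOKEN_FROM_LOGIN",
    "account_fk": "ACCOUNT_FK_FROM_ACCOUNTS_LIST",
    "from_token": "FROM_TOKEN_MINT_OR_CONTRACT_ADDRESS",
    "to_token": "TO_TOKEN_MINT_OR_CONTRACT_ADDRESS",
    "amount": "1.5",
}

def confirmed(answer: str) -> bool:
    # The description demands an explicit yes/no; anything but "yes" aborts.
    return answer.strip().lower() == "yes"

if confirmed("yes"):
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "ai__crypto_dex_swap", "arguments": swap_args},
    }
```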
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that the DEX charges fees and unsupported tokens require a blockchain explorer, adding useful behavioral context. However, it doesn't cover critical aspects like transaction finality, slippage, error conditions, or security implications of a blockchain swap operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: the first two sentences are nearly identical (redundant), and parameter documentation is embedded rather than separated. While most sentences add value, the repetition and mixed structure reduce effectiveness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 6-parameter blockchain swap tool with no annotations, the description provides basic purpose and some behavioral context, but lacks comprehensive guidance on usage, parameter details, error handling, and security considerations. The presence of an output schema helps, but doesn't compensate for the significant gaps in operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all 6 parameters with brief explanations, but provides minimal semantic context about format requirements (e.g., token addresses vs symbols), validation rules, or relationships between parameters like account_fk and token. The explanations are too basic for a complex financial operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Swaps a token for another token') and resource ('using a Distributed Exchange (DEX) on the blockchain'), distinguishing it from non-swap tools. However, it doesn't explicitly differentiate from sibling 'ai__crypto_swap' or 'ai__account_swap', which appear to be similar swap operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance with 'Confirm (yes/no) before executing' and mentions unsupported tokens won't appear in balances, but lacks explicit when-to-use instructions, prerequisites, or alternatives to other swap tools. No comparison with sibling tools like 'ai__crypto_swap' is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto_digitalasset_info (C)
List the digital asset's information based on a blockchain_fk and digital_asset_fk
List the digital asset's information based on a blockchain_fk and digital_asset_fk @param api_key: The api key allocated to your application @param blockchain_fk: The blockchain_fk on which this digital_asset_fk resides @param digital_asset_fk: The digital_asset_fk for this digital asset
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| blockchain_fk | Yes | | |
| digital_asset_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
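The two foreign keys come from separate listing tools: `blockchain_fk` from `ai__crypto_blockchains` and `digital_asset_fk` from `ai__crypto_digitalassets`. A hypothetical request combining them, with placeholder values standing in for whatever those listings actually return:

```python
# Hypothetical MCP "tools/call" request for ai__crypto_digitalasset_info.
# The fk values are placeholders obtained from the two listing tools.
info_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ai__crypto_digitalasset_info",
        "arguments": {
            "api_key": "YOUR_API_KEY",
            "blockchain_fk": 1,      # from ai__crypto_blockchains
            "digital_asset_fk": 7,   # from ai__crypto_digitalassets
        },
    },
}
```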
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must cover behavioral traits. It only states the tool lists information and returns a JSON object, lacking details on permissions, rate limits, error handling, or data freshness. This is insufficient for a tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose but repeats the first sentence unnecessarily. The parameter explanations are clear but could be more streamlined. Overall, it's adequately sized but has some redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, 0% schema coverage, and an output schema (implied by '@return: a json object'), the description covers parameters well but lacks behavioral context. It's minimally viable but incomplete for safe and effective use by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining all three parameters: api_key as 'allocated to your application', blockchain_fk as 'on which this digital_asset_fk resides', and digital_asset_fk as 'for this digital asset'. This adds meaningful context beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'List[s] the digital asset's information' with specific parameters (blockchain_fk and digital_asset_fk), which clarifies the verb and resource. However, it doesn't distinguish this from sibling tools like 'ai__crypto_info' or 'ai__crypto_digitalassets', making the purpose somewhat vague in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions parameters but doesn't specify prerequisites, exclusions, or compare it to sibling tools such as 'ai__crypto_info' or 'ai__crypto_digitalassets', leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto_digitalassets (C)
List all the supported digital assets, a list of digital_asset_fk also applicable to to_digital_asset_fk
List all the supported digital assets, each entry returns a digital_asset_fk for later use @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It mentions the tool lists assets and returns identifiers, but lacks details on behavioral traits such as rate limits, pagination, error handling, or whether it's a read-only operation (implied by 'list' but not explicit). The description adds some context (e.g., 'for later use') but is insufficient for a tool with no annotation coverage, missing critical operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but repetitive (the first two sentences are nearly identical) and lacks clear structure. It front-loads the purpose but includes redundant information. While not overly verbose, the repetition reduces efficiency, and the inclusion of '@param' and '@return' sections adds some organization but could be better integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter) and the presence of an output schema (which handles return values), the description is somewhat complete. It covers the basic purpose and parameter semantics. However, with no annotations and many sibling tools, it lacks sufficient context on usage, behavior, and differentiation, making it adequate but with clear gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description includes '@param api_key: The api key allocated to your application', which adds meaning beyond the input schema (which has 0% description coverage and only specifies type and requirement). This clarifies the purpose of the single parameter. Since there is only one parameter and the description covers it adequately, the score is high, compensating for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'List[s] all the supported digital assets' and mentions that entries return a 'digital_asset_fk' for later use, which clarifies the verb (list) and resource (digital assets). However, it is vague about what 'digital assets' are (e.g., cryptocurrencies, tokens) and does not differentiate from siblings like 'ai__crypto_digitalasset_info' or 'ai__wallet_assets_list', leaving ambiguity in scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. While it implies usage for obtaining digital asset identifiers, it does not mention prerequisites (e.g., authentication via api_key), context (e.g., for transactions or queries), or exclusions (e.g., not for balance checks). The presence of many sibling tools (e.g., 'ai__crypto_digitalasset_info') without differentiation results in minimal guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
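The description's one concrete contract is that each entry "returns a digital_asset_fk for later use" (also usable as to_digital_asset_fk in other tools). A minimal sketch of harvesting those identifiers, assuming a hypothetical response shape since the tool's output schema is undocumented:

```python
# Sketch: pulling digital_asset_fk values out of a /crypto/digitalassets
# response. The field names ("digital_assets", "digital_asset_fk",
# "symbol") are assumptions -- the real payload may differ.

def extract_asset_fks(response: dict) -> list:
    """Collect the digital_asset_fk of every supported asset."""
    return [asset["digital_asset_fk"]
            for asset in response.get("digital_assets", [])]

# Hypothetical payload illustrating the documented contract.
sample = {
    "digital_assets": [
        {"digital_asset_fk": 1, "symbol": "BTC"},
        {"digital_asset_fk": 2, "symbol": "USDC"},
    ]
}
print(extract_asset_fks(sample))  # → [1, 2]
```

The collected identifiers would then feed tools that take digital_asset_fk or to_digital_asset_fk parameters.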
ai__crypto_from_key (Grade A)
Recovers any blockchain wallet from a private key or mnemonic, display only, no action is taken
Recovers any blockchain wallet from a private key or mnemonic, display only, no action is taken The order of recover is to use the mnemonic first, private key second. One or the other must be provided. No action is taken. The suppo @param api_key: The api key allocated to your application @param blockchain_fk: The blockchain to use for the recovery @param private_key: The private key to recover from, urlencoded @param mnemonic: The key word mnemonic to recover from, keywords separated by a space
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
| mnemonic | No | | |
| private_key | No | | |
| blockchain_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It clearly discloses key behavioral traits: 'display only, no action is taken' (indicating read-only safety), the recovery order ('mnemonic first, private key second'), and the requirement that 'One or the other must be provided.' It also hints at output format ('a json object'). However, it doesn't cover potential errors, rate limits, or authentication details beyond the api_key parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive ('Recovers any blockchain wallet...' appears twice) and contains a fragment ('The suppo'). It's front-loaded with the core purpose but wastes space on redundancy. The param annotations are structured but could be integrated more smoothly. Overall, it's somewhat inefficient and lacks polish.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (cryptographic recovery), no annotations, and an output schema exists (so return values are documented elsewhere), the description is fairly complete. It covers purpose, safety ('no action'), input requirements, and order of operations. However, it misses details like error handling or example usage, which would help for a security-sensitive tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all four parameters with brief explanations (e.g., 'api_key: The api key allocated to your application'), adding basic semantics beyond the schema's type definitions. However, it doesn't clarify format details (e.g., what 'urlencoded' means for private_key, what values blockchain_fk accepts), leaving gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('recovers') and resource ('blockchain wallet') with specific inputs ('private key or mnemonic'). It distinguishes from siblings by emphasizing 'display only, no action is taken,' which differentiates it from tools like ai__crypto_spend or ai__crypto_swap that perform transactions. However, it doesn't explicitly contrast with similar recovery tools like ai__access_recover.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'display only, no action is taken' and 'One or the other must be provided,' which suggests this is for read-only wallet inspection. However, it lacks explicit guidance on when to use this versus alternatives like ai__crypto_verify or ai__access_recover, and doesn't mention prerequisites such as needing an api_key.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
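The description does state three rules an agent must honor: one of mnemonic or private_key is required, the mnemonic takes precedence when both are given, and the private key must be urlencoded. A sketch of a request builder enforcing them (the helper name and dict shape are illustrative, not part of the API):

```python
from urllib.parse import quote

def build_recover_params(api_key, blockchain_fk, mnemonic=None, private_key=None):
    """Assemble /crypto/from_key parameters per the documented rules:
    one of mnemonic or private_key must be provided, and the mnemonic
    is used first when both are present."""
    if not mnemonic and not private_key:
        raise ValueError("provide a mnemonic or a private key")
    params = {"api_key": api_key, "blockchain_fk": blockchain_fk}
    if mnemonic:
        # keywords separated by a space, per the description
        params["mnemonic"] = mnemonic
    else:
        # the description requires the private key urlencoded
        params["private_key"] = quote(private_key, safe="")
    return params
```

Encoding only at the call boundary keeps the raw key out of intermediate strings.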
ai__crypto_info (Grade B)
Returns a crypto wallet address and balances associated with an account_fk
Returns a crypto wallet address and balances associated with an account_fk @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk for the account as provided by /wallet/accounts_list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states this is a read operation ('returns'), implying non-destructive behavior, but doesn't disclose authentication requirements beyond parameter descriptions, rate limits, error conditions, or what specific data is included in the 'crypto wallet address and balances'. The mention of dependencies on other endpoints adds some context but is insufficient for full behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has structural issues: the first line is duplicated, creating redundancy. The parameter explanations are clear but could be better integrated. Overall, it's front-loaded with the purpose, but the repetition and separate @param/@return sections make it slightly less efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), the description's main gaps are in usage guidelines and behavioral transparency. It covers purpose and parameters adequately but lacks context about when to use it, error handling, or detailed behavioral traits. For a crypto info tool with no annotations, this is minimally viable but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides clear semantic explanations for all three parameters: 'api_key' as 'allocated to your application', 'token' as 'provided by /access/login', and 'account_fk' as 'for the account as provided by /wallet/accounts_list'. This adds meaningful context beyond the bare schema types, though it doesn't specify formats or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a crypto wallet address and balances associated with an account_fk'. It specifies the verb ('returns'), resource ('crypto wallet address and balances'), and scope ('associated with an account_fk'). However, it doesn't explicitly differentiate from sibling tools like 'ai__crypto_balance' or 'ai__crypto_token_balance', which might offer similar functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions dependencies on other endpoints ('/access/login' and '/wallet/accounts_list') for obtaining parameters, but doesn't specify scenarios where this tool is appropriate or when other crypto-related tools should be used instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
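The dependency chain the review points to (/access/login for the token, /wallet/accounts_list for the account_fk, then this tool) can be sketched as one function. The endpoint paths come from the descriptions; the payload shapes, the injected `call` transport, and the login credentials are assumptions for illustration:

```python
# Sketch of the documented credential chain for /crypto/info.

def fetch_crypto_info(call, api_key, username, password):
    login = call("/access/login", api_key=api_key,
                 username=username, password=password)
    token = login["wallet_api_token"]                   # field name assumed
    accounts = call("/wallet/accounts_list", api_key=api_key, token=token)
    account_fk = accounts["accounts"][0]["account_fk"]  # shape assumed
    return call("/crypto/info", api_key=api_key, token=token,
                account_fk=account_fk)
```

Injecting the transport keeps the chain testable without network access.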
ai__crypto_optin (Grade A)
Opts the blockchain wallet into a digital asset (token). Confirm (yes/no) before executing
Opts the blockchain wallet into a digital asset. This is typically required to be performed before the blockchain wallet will accept or hold this digital asset. Blockchains either charge a fee or wit @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk associated with this blockchain wallet @param asset_id: The asset_id or contract address of the digital asset (token) on this blockchain can be found from /crypto/digitalassets
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| asset_id | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that 'Blockchains either charge a fee or wit' (likely truncated, implying fees or other costs), adding useful context about potential costs. However, it lacks details on permissions, rate limits, or what 'Confirm (yes/no) before executing' entails operationally, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat front-loaded with the core action, but it includes redundant sentences (e.g., repeating 'Opts the blockchain wallet into a digital asset') and a truncated phrase ('wit'), reducing efficiency. The param annotations are helpful but could be integrated more seamlessly, making the structure adequate but not optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with 4 required params), no annotations, and an output schema present (so return values are covered), the description does a decent job. It explains the purpose, provides param semantics, and hints at behavioral aspects like fees, though it could better address prerequisites or error conditions for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It provides param annotations that explain each parameter's purpose and sources (e.g., 'asset_id' from '/crypto/digitalassets'), adding significant meaning beyond the bare schema. However, it does not cover all semantic details like formats or constraints, slightly limiting completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Opts the blockchain wallet into a digital asset') and resource ('blockchain wallet', 'digital asset'), making the purpose specific. However, it does not explicitly differentiate from its sibling 'ai__crypto_optout', which handles opting out, though the distinction is implied by the tool names.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some context by stating this is 'typically required before the blockchain wallet will accept or hold this digital asset', implying a prerequisite use case. It does not explicitly mention when not to use it or name alternatives, such as 'ai__crypto_optout' for the opposite action, leaving usage partially implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto_optout (Grade A)
Opts the blockchain wallet out of holding a digital asset. Confirm (yes/no) before executing
Opts the blockchain wallet out of holding a digital asset. Ensure that the blockchain wallet has a zero balance of this asset, before performing this action as value may be lost. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk associated with this blockchain wallet @param asset_id: The asset_id or contract address of the digital asset on this blockchain, can be found from /crypto/digitalassets
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| asset_id | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by warning about potential value loss if the wallet has a non-zero balance, indicating a destructive or irreversible action. This adds crucial context beyond the basic function, though it could further detail error handling or confirmation mechanisms.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized but lacks optimal structure. It repeats the core purpose in the first two lines, which is redundant. The parameter explanations are clear but could be more front-loaded; the warning about zero balance is critical but placed in the middle, slightly reducing efficiency. Overall, it conveys necessary information without excessive fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a destructive operation with 4 parameters), no annotations, 0% schema coverage, but with an output schema indicated, the description is largely complete. It covers purpose, usage warnings, and parameter semantics effectively. The output schema handles return values, so the description need not explain them. Minor gaps include lack of sibling differentiation and more detailed behavioral traits like error responses.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the input schema provides no parameter descriptions. The description compensates fully by listing each parameter with clear semantics: 'api_key' as 'The api key allocated to your application', 'token' as 'The wallet_api_token provided by /access/login', 'account_fk' as 'The account_fk associated with this blockchain wallet', and 'asset_id' as 'The asset_id or contract address of the digital asset on this blockchain, can be found from /crypto/digitalassets'. This adds essential meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'opts out' and resource 'blockchain wallet' from 'holding a digital asset', making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'ai__crypto_optin', which likely performs the opposite action, leaving room for improvement in sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage with the instruction to 'Confirm (yes/no) before executing' and the warning to 'Ensure that the blockchain wallet has a zero balance of this asset, before performing this action as value may be lost.' This offers explicit guidance on prerequisites and risks, though it does not mention when to use this tool versus alternatives like 'ai__crypto_optin' or other asset management tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
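The zero-balance warning is the kind of precondition an agent should enforce in code rather than trust to prompting. A sketch of a guard, where `balance_fn` stands in for a balance lookup (e.g. via /crypto/token_balance; its real signature is an assumption):

```python
def prepare_optout(balance_fn, api_key, token, account_fk, asset_id):
    """Guard for /crypto/optout: the description warns that opting out
    of an asset with a non-zero balance may lose value, so refuse to
    build the request unless the balance is zero."""
    if balance_fn(account_fk, asset_id) != 0:
        raise RuntimeError("refusing optout: asset balance is not zero")
    return {"api_key": api_key, "token": token,
            "account_fk": account_fk, "asset_id": asset_id}
```

Raising instead of returning a flag forces the caller to handle the unsafe case explicitly.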
ai__crypto_spend (Grade A)
Spends a digital asset to a destination blockchain address. Confirm (yes/no) before executing
Spends a digital asset to a destination blockchain address @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk of the sender or from account @param asset_id: The asset_id or contract address of the digital asset (token) on this blockchain, asset_id can be found from /crypto/digitalassets look for "address" in the response @param destination: The destination blockchain address (not the internal account address) @param amount: The amount of the digital asset to send, this amount is 10 to the power of the assets decimals, the asset's decimals can be found from /crypto/digitalassets look for "decimals" in the response. @param note: The note on the transaction
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | | |
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| asset_id | Yes | | |
| account_fk | Yes | | |
| destination | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions a confirmation step ('Confirm (yes/no) before executing'), which hints at caution but doesn't detail behavioral traits like irreversible transactions, permission requirements, rate limits, or error handling. For a financial transaction tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and confirmation guideline, but it becomes verbose with parameter explanations. While informative, the parameter details could be more structured (e.g., bullet points) for better readability. Some redundancy exists (e.g., repeating the purpose line), reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (financial transaction with 7 parameters), no annotations, and an output schema present (implying return values are documented elsewhere), the description is mostly complete. It covers purpose, usage hint, and detailed parameter semantics. However, it lacks behavioral context (e.g., transaction finality, fees), which is a minor gap for such a high-stakes tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides detailed semantics for all 7 parameters, explaining their purposes, sources (e.g., 'asset_id can be found from /crypto/digitalassets'), and special considerations (e.g., amount scaling with decimals). This adds substantial value beyond the bare schema, making parameters well-understood.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Spends a digital asset to a destination blockchain address.' It specifies the verb ('spends') and resource ('digital asset'), and distinguishes it from siblings like 'ai__account_send' by focusing on blockchain transactions. However, it doesn't explicitly differentiate from all potential siblings beyond the basic action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline: 'Confirm (yes/no) before executing,' which implies a safety check. It also references other tools ('/crypto/digitalassets') for parameter values, providing some context. However, it lacks explicit when-to-use vs. alternatives (e.g., compared to 'ai__account_send' or 'ai__crypto_swap'), leaving usage somewhat implied rather than fully guided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
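The amount scaling the description documents (the on-chain integer is the human amount times 10 to the power of the asset's decimals, with decimals read from /crypto/digitalassets) is exactly where float arithmetic bites. A small conversion sketch using Decimal (the helper name is illustrative):

```python
from decimal import Decimal

def to_base_units(amount: str, decimals: int) -> int:
    """Convert a human-readable amount into the integer base units
    /crypto/spend expects: amount * 10 ** decimals. Decimal avoids
    binary floating-point drift (e.g. 0.1 * 10**18 via floats)."""
    scaled = Decimal(amount) * (Decimal(10) ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError("amount has more precision than the asset allows")
    return int(scaled)

print(to_base_units("1.5", 6))    # → 1500000
print(to_base_units("0.001", 8))  # → 100000
```

Passing the amount as a string keeps the caller from introducing float error before the conversion even starts.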
ai__crypto_swap (Grade A)
Swaps a digital asset for another using a Distributed Exchange (DEX) on the blockchain. Confirm (yes/no) before executing
Swaps a digital asset for another using a Distributed Exchange (DEX). The DEX charges fees in performing the swap This tools forces the use of a swap via a blockchain DEX, rather try /account/swap to swap between digital assets @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk associated with this blockchain wallet @param digital_asset_fk: The origin digital_asset_fk, get value from /crypto/digitalassets @param to_digital_asset_fk: The destination digital_asset_fk, get value from /crypto/digitalassets @param amount: The amount of the digital asset to swap
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| digital_asset_fk | Yes | | |
| to_digital_asset_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it requires user confirmation before execution, mentions that fees are charged by the DEX, and specifies that it forces a blockchain DEX swap. However, it doesn't detail potential risks (e.g., slippage, irreversible transactions) or rate limits, leaving some gaps for a financial tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat repetitive (first two sentences are similar) and could be more streamlined. However, it is front-loaded with the core purpose and key guidelines, and the parameter explanations are organized with @param tags. It includes necessary information but could be more concise by eliminating redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial swap with 6 parameters), no annotations, and an output schema present (so return values are documented elsewhere), the description is mostly complete. It covers purpose, usage guidelines, behavioral aspects, and parameter semantics. However, it lacks details on error handling or specific DEX interactions, which could be important for full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides clear semantics for all 6 parameters: explains what each parameter represents (e.g., 'api_key: The api key allocated to your application'), sources for values (e.g., 'get value from /crypto/digitalassets'), and their roles (origin vs. destination assets). This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Swaps a digital asset for another using a Distributed Exchange (DEX) on the blockchain.' It specifies the verb ('swaps'), resource ('digital asset'), and mechanism ('DEX on blockchain'). However, it doesn't explicitly differentiate from sibling tools like 'ai__account_swap' beyond mentioning it as an alternative, which is noted in usage guidelines rather than purpose clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: it instructs to 'Confirm (yes/no) before executing,' mentions that 'This tools forces the use of a swap via a blockchain DEX, rather try /account/swap to swap between digital assets,' and notes 'The DEX charges fees in performing the swap.' This covers when to use (via DEX), when not to (use /account/swap as alternative), and prerequisites (confirmation, fees).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
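The routing rule the description gives (prefer /account/swap for asset-to-asset swaps; use this tool only when an on-chain DEX swap is specifically wanted, accepting DEX fees) can be encoded as a trivial selector so the choice is explicit in agent code rather than implicit in a prompt:

```python
def choose_swap_endpoint(force_dex: bool = False) -> str:
    """Route per the description: /account/swap is the suggested default
    for swapping between digital assets; /crypto/swap forces an on-chain
    DEX swap, which incurs DEX fees."""
    return "/crypto/swap" if force_dex else "/account/swap"
```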
ai__crypto_token_balance (Grade B)
Retrieves a token (asset_id) balance on the blockchain, given a asset_id or token or contract address
Retrieves a token (asset_id) balance on the blockchain, given a asset_id or token or contract address This tool will retrieve any token balance from the blockchain, even those not support by Netfluid @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk associated with this blockchain wallet @param asset_id: The asset_id or contract address of the digital asset on this blockchain, can be found from /crypto/digitalassets
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| asset_id | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
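The four required parameters above can be assembled into a call-arguments object before invoking the tool. A minimal Python sketch; the values below are placeholders, and real ones come from /access/login (token) and /crypto/digitalassets (asset_id) as the description notes:

```python
def build_token_balance_args(api_key, token, account_fk, asset_id):
    """Assemble the required arguments for ai__crypto_token_balance,
    failing fast if any required value is empty."""
    args = {
        "api_key": api_key,        # key allocated to your application
        "token": token,            # wallet_api_token from /access/login
        "account_fk": account_fk,  # account tied to the blockchain wallet
        "asset_id": asset_id,      # asset_id or contract address
    }
    missing = [name for name, value in args.items() if not value]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return args
```

Validating before the call is cheaper than a round-trip failure, since the schema itself documents no formats or error behavior.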
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses it's a retrieval (read-only) operation and mentions it works for tokens not supported by Netfluid, adding useful context. However, it lacks details on rate limits, error conditions, or authentication behavior beyond parameter requirements. It adequately describes core behavior but misses operational traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured: it repeats the first sentence verbatim, wasting space. The param annotations are helpful but could be integrated more smoothly. It's front-loaded with purpose but includes redundant text, reducing efficiency. Sentences don't all earn their place due to repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage and an output schema (implied by @return), the description does well: it explains all parameters and states the return is a JSON object. For a retrieval tool with no annotations, it covers purpose, params, and output adequately, though it could add more on error handling or examples.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides param annotations explaining each parameter's purpose and sources (e.g., asset_id 'can be found from /crypto/digitalassets'), adding significant meaning beyond the bare schema. However, it doesn't detail formats or constraints for parameters like api_key or token, leaving some gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieves a token (asset_id) balance on the blockchain' with specific resources (token/asset_id/contract address). It distinguishes from sibling tools like 'ai__crypto_balance' by specifying token-level retrieval rather than general crypto balances. However, it repeats the same sentence twice, slightly reducing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning it retrieves balances 'even those not support by Netfluid,' suggesting it's for broader token coverage. It doesn't explicitly state when to use this vs. alternatives like 'ai__crypto_balance' or provide prerequisites beyond parameters. Guidelines are present but not fully explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__crypto_verify (C)
Verifies a blockchain address as valid for the blockchain
Verifies a blockchain address as valid for the blockchain. Can be any address on that blockchain, if successful it returns a balance of its crypto and digital assets. @param api_key: The api key allocated to your application @param address: The blockchain address @param blockchain_fk: The blockchain_fk of the supported blockchain.
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
| api_key | Yes | | |
| blockchain_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
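Since the server speaks MCP over streamable HTTP, a call to this tool travels as a JSON-RPC 2.0 `tools/call` request. A sketch of that envelope; the argument values are placeholders, not real credentials:

```python
import json

def tools_call_envelope(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 'tools/call' request body, the envelope
    MCP clients send to the server. Only the envelope shape comes
    from the MCP spec; the argument values are illustrative."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

body = tools_call_envelope(
    "ai__crypto_verify",
    {"api_key": "demo-key", "address": "demo-address", "blockchain_fk": "1"},
)
```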
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool verifies addresses and returns balance data, but lacks details on error handling, rate limits, authentication requirements beyond the api_key parameter, or what 'valid' means (e.g., format checks vs. on-chain existence). This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: it repeats the first sentence verbatim, wasting space. The @param and @return annotations are clear but could be integrated more smoothly. Overall, it's front-loaded with the purpose but includes redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and 3 parameters with 0% schema coverage, the description is incomplete. It covers the basic purpose and parameters superficially but lacks behavioral details and usage guidelines, making it only minimally adequate for a verification tool with authentication needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description includes @param annotations that name the parameters (api_key, address, blockchain_fk) but add minimal semantics: it only clarifies that address is 'the blockchain address' and blockchain_fk is 'of the supported blockchain', without explaining formats, constraints, or examples. This fails to compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'verifies a blockchain address as valid for the blockchain' and mentions it returns balance information, which clarifies the verb and resource. However, it repeats the same sentence verbatim, and while it distinguishes from some siblings (e.g., ai__crypto_balance focuses on balance only), it doesn't explicitly differentiate from tools like ai__crypto_info or ai__wallet_verify that might overlap in functionality, making it somewhat vague in sibling context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing an API key or valid blockchain_fk, or compare it to siblings like ai__crypto_balance or ai__crypto_info, leaving the agent without clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__currency (B)
Provides tools to retrieve currencies at Netfluids rates This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It indicates the tool retrieves data (implying read-only) and returns a JSON object, which is basic behavioral info. However, it lacks details on rate limits, authentication needs, error handling, or what 'Netfluids rates' entail. The mention of 'referenced_tools' schema adds some context but is ambiguous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but poorly structured and front-loaded. The first sentence is clear, but the second is confusing ('This tools provides reference information in the "referenced_tools" schema'), and the third ('@return: a json object') is redundant given the output schema. It could be more concise and better organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema, the description is minimally adequate. However, it lacks clarity on what 'currencies' and 'Netfluids rates' mean, and the 'referenced_tools' schema reference is unexplained. For a tool in a complex financial context with many siblings, more detail would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, but that's acceptable here. A baseline of 4 is appropriate as the schema fully covers the absence of parameters, and the description doesn't need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'retrieves currencies at Netfluids rates' and 'provides reference information', which gives a general purpose. However, it's vague about what specific information is retrieved (e.g., exchange rates, currency lists) and doesn't clearly distinguish it from sibling tools like 'ai__currency_crypto', 'ai__currency_forex', or 'ai__currency_rates', which likely have overlapping or related functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions 'reference information in the "referenced_tools" schema', but this is unclear and doesn't specify use cases, prerequisites, or exclusions. Without explicit or implied context, users lack direction on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__currency_crypto (C)
Returns a live price in USD for a based commodity
Returns a live price in USD for a based commodity @param api_key: The api key allocated to your application @param code: The crypto code e.g BTC, ETH, ALGO, HBAR
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | | |
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a live price, implying it's a read-only operation, but doesn't mention rate limits, authentication requirements beyond the api_key, error handling, or what the JSON response contains. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose but repeats the first sentence unnecessarily. The @param and @return sections are structured but could be more integrated. It's reasonably concise but has some redundancy and could be more polished in presentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and low schema coverage (0%), the description does an adequate job by explaining the purpose and parameters. However, it lacks details on behavioral aspects like error cases or API constraints, leaving room for improvement in completeness for a tool that interacts with external data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description explicitly documents both parameters with @param tags, adding meaning beyond the input schema (which has 0% description coverage). It clarifies that 'api_key' is allocated to the application and 'code' is a crypto code like BTC or ETH. This compensates well for the lack of schema descriptions, though it doesn't specify format constraints (e.g., code case-sensitivity).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool returns a live price in USD for a commodity, which clarifies the action and resource. However, it's vague about what 'based commodity' means (cryptocurrency is implied but not explicit), and it doesn't distinguish this tool from potential siblings like 'ai__currency' or 'ai__currency_rates' that might handle similar functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites beyond the parameters, nor does it differentiate from sibling tools like 'ai__crypto' or 'ai__currency_rates' that might offer overlapping functionality. Usage is implied only through the parameter descriptions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__currency_forex (C)
Returns a live forex price for a commodity in USD
Returns a live forex price for a commodity in USD @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It mentions the tool returns a live price and a JSON object, but lacks critical behavioral details: it doesn't specify if this is a read-only operation, potential rate limits, error handling, or authentication requirements beyond the api_key parameter. For a tool with no annotations, this leaves key behaviors undisclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but has structural issues: it repeats the first sentence verbatim, which is wasteful, and includes a param annotation that could be integrated more smoothly. It's front-loaded with the core purpose, but the repetition and annotation formatting reduce efficiency, making it adequate but not exemplary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no annotations, has output schema), the description is minimally complete. It states the purpose and parameter, and the output schema will handle return values. However, it lacks usage guidelines and behavioral context, which are needed for full agent understanding, keeping it at an average level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description adds a param annotation: '@param api_key: The api key allocated to your application.' This provides meaning for the single parameter, explaining it's an authentication key. However, it doesn't cover format, validation, or sourcing details. With one parameter and some added semantics, it meets the baseline but doesn't fully compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a live forex price for a commodity in USD.' It specifies the action (returns), resource (forex price), and scope (commodity in USD). However, it doesn't differentiate from potential siblings like 'ai__currency_rates' or 'ai__currency_types', which might offer related currency data, so it doesn't reach a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or comparisons to sibling tools (e.g., 'ai__currency_rates'), leaving the agent without context for selection. This is a significant gap in usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__currency_rates (C)
Returns a live forex price for a commodity in USD against our rates, XAU and XAG is returned as price in USD per gram. Returns "our_rate", the Netfluid rate, use "our_rate" in all forex conversion
Returns a live forex price for a commodity in USD against our rates @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
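The description's instruction to use "our_rate" in all forex conversions can be sketched as follows. The response layout is assumed for illustration, since the tool only documents that it returns a JSON object containing "our_rate":

```python
import json

# Hypothetical response payload; only the "our_rate" field is named
# by the tool description, the currency code and value are assumed.
sample_response = json.loads('{"code": "ZAR", "our_rate": 18.5}')

def convert_from_usd(amount_usd, rate_payload):
    """Convert a USD amount using the Netfluid 'our_rate' field,
    as the description instructs for all forex conversions."""
    return round(amount_usd * rate_payload["our_rate"], 2)
```

Note the description's caveat that XAU and XAG are special-cased as USD per gram, so the same multiplication yields grams of metal, not a currency amount.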
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral insight. It states the tool returns live data and specifies a rate to use ('our_rate'), but doesn't mention authentication requirements beyond the api_key parameter, rate limits, data freshness, error conditions, or whether this is a read-only operation. The description adds some context but leaves critical behavioral aspects undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive (first and third sentences are nearly identical) and poorly structured. It mixes purpose, parameter documentation, and return value information without clear organization. While brief, the repetition and lack of logical flow reduce effectiveness rather than demonstrating conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (live forex data with specific rate handling), no annotations, 0% schema coverage, but with an output schema present, the description is minimally adequate. It covers the basic purpose and one parameter, and the output schema reduces need to explain return values. However, it lacks details on commodity scope, authentication context, and behavioral traits needed for full understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but only partially does. It documents the api_key parameter as 'The api key allocated to your application', which adds basic meaning. However, it doesn't mention any other parameters (like commodity identifier) that might be needed, leaving significant gaps in parameter understanding despite the single documented parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool returns live forex prices for commodities in USD against Netfluid rates, with specific mention of XAU and XAG priced per gram. However, it's vague about which commodities are supported beyond the two examples, and it doesn't clearly distinguish this tool from potential siblings like 'ai__currency_forex' or 'ai__commodity' that might handle similar data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions using 'our_rate' for forex conversion, but doesn't explain when this tool is appropriate compared to other currency or commodity-related tools in the sibling list, such as 'ai__currency_forex' or 'ai__commodity'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__currency_types (C)
Returns all system currencies
Returns all system currencies @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It states the tool returns data and mentions an API key parameter, but doesn't disclose behavioral traits such as whether it's read-only, has rate limits, authentication requirements beyond the API key, or any side effects. This is inadequate for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive ('Returns all system currencies' appears twice) and includes param/return annotations that are somewhat redundant given the structured fields. It's front-loaded with the purpose, but could be more efficiently structured without duplication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, output schema exists), the description is relatively complete. It states the purpose and parameter, and with an output schema, return values don't need explanation. However, it lacks behavioral context and usage guidelines, which are minor gaps for this straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but there's only one parameter (api_key). The description adds minimal semantics by noting it's 'allocated to your application', which provides some context. However, it doesn't explain format, validation, or usage details, so it partially compensates but not fully.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Returns all system currencies' which clearly indicates the verb (returns) and resource (system currencies). However, it doesn't distinguish this tool from siblings like 'ai__currency' or 'ai__currency_rates' in the sibling list, making it somewhat vague about its specific scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention any prerequisites, context for usage, or comparison to sibling tools like 'ai__currency' or 'ai__currency_rates', leaving the agent without clear direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__email (B)
Sends email to the wallet owner
Sends email to the wallet owner @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param message: The message body in html, encoded into base64 @param subject: The message subject in plain text (not url or base64 encoded)
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| message | Yes | | |
| subject | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
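Per the parameter notes above, the message body must be base64-encoded HTML while the subject stays plain text. A minimal sketch of assembling the arguments; the credential values are hypothetical:

```python
import base64

def build_email_args(api_key, token, wallet_fk, subject, html_body):
    """Assemble ai__email arguments: subject stays plain text,
    message is the HTML body encoded as base64 per the description."""
    return {
        "api_key": api_key,
        "token": token,
        "wallet_fk": wallet_fk,
        "subject": subject,  # plain text, not URL- or base64-encoded
        "message": base64.b64encode(html_body.encode("utf-8")).decode("ascii"),
    }
```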
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool sends email but does not mention authentication requirements (implied by parameters), rate limits, side effects, or response behavior. This is inadequate for a tool with multiple required parameters and potential security implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose but repeats it unnecessarily. The parameter explanations are structured but verbose, and the return statement is vague ('a json object'). It could be more streamlined without losing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 required parameters, no annotations, but an output schema exists), the description covers parameter semantics well but lacks behavioral context and usage guidelines. The output schema mitigates the need to explain return values, but overall completeness is moderate with notable gaps in transparency and guidelines.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides clear semantics for all 5 parameters, explaining their purposes and formats (e.g., 'message body in html, encoded into base64', 'subject in plain text'). This adds significant value beyond the bare schema, though some details like encoding specifics could be more explicit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Sends email') and target ('to the wallet owner'), providing a specific verb and resource. However, it does not differentiate from the sibling tool 'ai__send_email', which appears to serve a similar purpose, leaving some ambiguity about when to use one over the other.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as the sibling 'ai__send_email'. The description only repeats the purpose without indicating context, prerequisites, or exclusions, leaving the agent without usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fees (B)
Provide the Netfluid fee structure @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It implies a read-only operation ('Provide'), which is consistent with no destructive hints, but doesn't disclose behavioral traits like authentication needs, rate limits, or whether the data is static or dynamic. The @return note adds minimal context about output format, but overall behavioral disclosure is basic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two short sentences, front-loading the purpose. The @return note is somewhat redundant given the output schema exists, but it doesn't add significant waste. It could be slightly more structured but remains efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters, 100% schema coverage, and an output schema, the description is minimally adequate. However, it lacks context about what 'Netfluid fee structure' entails (e.g., platform vs. transaction fees) and doesn't leverage the rich sibling list to clarify scope. For a simple lookup tool, it's complete enough but misses opportunities for clarity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description correctly adds no parameter information, which is appropriate for a zero-parameter tool and avoids unnecessary detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Provide[s] the Netfluid fee structure', which gives a clear purpose (verb+resource). However, it doesn't differentiate from sibling tools like 'ai__wallet_fee' or 'ai__account_charge', leaving ambiguity about scope. The purpose is specific enough to understand what information is retrieved but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With many sibling tools related to fees, accounts, and wallets, the description doesn't indicate if this is for general platform fees, specific account types, or other contexts. There's no mention of prerequisites, alternatives, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fund (B)
Returns a list of methods by which an account can be funded This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It states this is a read-only reference tool ('Returns a list', 'provides reference information'), which adequately conveys it's non-destructive. However, it lacks details about authentication needs, rate limits, or what 'reference information' specifically entails beyond the return format mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (3 sentences) but contains redundant phrasing ('This tools provides' has a grammatical error) and could be more front-loaded. The second sentence about 'referenced_tools' schema is somewhat technical and could be integrated more smoothly, though it's not excessively verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, no annotations, but an output schema exists, the description is reasonably complete. It explains the purpose (returning funding methods) and the return format (JSON object with schema), which aligns with the output schema handling return values. However, it could better address sibling differentiation and usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it correctly implies no inputs are required by not mentioning any. This meets the baseline for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Returns a list of methods by which an account can be funded', which provides a clear verb ('Returns') and resource ('list of methods'). However, it doesn't distinguish this tool from similar-sounding siblings like 'fund_banks', 'fund_card_quote', or 'fund_payshap', leaving the specific scope ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools related to funding (e.g., 'fund_banks', 'on_ramps'), there's no indication of whether this is a general reference tool, a prerequisite for other funding methods, or how it differs from other funding-related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fund_banks (C)
Returns a list of local and international bank accounts.
Returns a list of local and international bank accounts. Do not display blank fields. Customer MUST provide the reference as the beneficiary reference. Payments received, minus bank charges, are au @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk provided by /wallet/accounts_list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
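The parameter chain described above (an api_key allocated to the application, a token from /access/login, an account_fk from /wallet/accounts_list) recurs across most funding tools. A minimal sketch of assembling that payload; the function name and validation policy are illustrative assumptions, not part of the Netfluid API:

```python
# Sketch: assembling the common auth/identifier payload used by funding
# endpoints such as ai__fund_banks. Values are placeholders; in practice
# token comes from /access/login and account_fk from /wallet/accounts_list.

def build_fund_banks_payload(api_key: str, token: str, account_fk: str) -> dict:
    """All three fields are marked required in the ai__fund_banks schema."""
    payload = {"api_key": api_key, "token": token, "account_fk": account_fk}
    missing = [name for name, value in payload.items() if not value]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return payload

payload = build_fund_banks_payload("app-key", "wallet-token", "acc-123")
```

Validating the three required fields client-side mirrors the schema's "Yes" markers; the real endpoint would reject an incomplete payload anyway.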
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that the tool returns a list and includes behavioral notes: 'Do not display blank fields' and 'Customer MUST provide the reference as the beneficiary reference.' However, it lacks details on permissions, rate limits, error handling, or what 'Payments received, minus bank charges, are au' implies (the text is truncated). The description adds some context but is incomplete for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured: it repeats the first sentence, includes a truncated and unclear phrase ('Payments received, minus bank charges, are au'), and mixes purpose, constraints, and parameter documentation without clear separation. While it's not overly verbose, the repetition and lack of organization reduce clarity, making it less effective than a well-structured description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (so return values are documented elsewhere), no annotations, and 0% schema coverage, the description provides basic purpose and parameter semantics but lacks comprehensive behavioral context. It covers the 'what' and 'how' for parameters but misses details on usage scenarios, error cases, and the truncated text creates ambiguity. It's minimally adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description compensates by documenting all three parameters with semantic context: 'api_key: The api key allocated to your application,' 'token: The wallet_api_token provided by /access/login,' and 'account_fk: The account_fk provided by /wallet/accounts_list.' This adds clear meaning beyond the bare schema, though it could be more detailed (e.g., format or constraints).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Returns a list of local and international bank accounts,' which provides a clear verb ('returns') and resource ('bank accounts'). However, it doesn't distinguish this tool from potential siblings like 'ai__beneficiaries_list' or 'ai__wallet_accounts_list,' which might also list financial accounts. The purpose is clear but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a note about not displaying blank fields and requiring a beneficiary reference, but these are usage constraints rather than guidance on when to use this tool versus alternatives. There's no explicit mention of when to use this tool over other list tools (e.g., 'ai__wallet_accounts_list'), nor any prerequisites or exclusions beyond the parameters. This leaves the agent without clear direction on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fund_card_quote (A)
Returns a quote for a VISA/Mastercard card charge.
STEP 1 in charging a tokenised Visa/Mastercard. Returns a quote for a VISA/Mastercard card charge. Cards are charged in ZAR. A fee is levied for each charge. This end point only performs a quote, no cards are charged. The quote is valid for 10 minu @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk @param account_fk: The account_fk provided by /wallet/accounts_list @param amount: The amount in currency of the account_fk. All cards are charged in ZAR but a conversion of this amount to ZAR will be automatically performed. On top of this amount a fee with be charged equal to 3.5% for fees and insurance plus South African VAT of 15% (only on the fee portion).
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
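The fee arithmetic the description states (a 3.5% fee on the charge amount, plus 15% South African VAT levied only on the fee portion) can be sketched as below. The function name and the two-decimal rounding policy are assumptions for illustration:

```python
# Sketch of the quoted card-charge arithmetic: 3.5% fee on the amount,
# 15% VAT on the fee portion only. Rounding to 2 decimals is an assumption.

def card_charge_total(amount_zar: float) -> dict:
    fee = round(amount_zar * 0.035, 2)   # 3.5% fees and insurance
    vat = round(fee * 0.15, 2)           # 15% South African VAT, fee only
    return {
        "amount": amount_zar,
        "fee": fee,
        "vat": vat,
        "total": round(amount_zar + fee + vat, 2),
    }

quote = card_charge_total(1000.00)
# fee = 35.00, vat = 5.25, total = 1040.25
```

For a ZAR 1000 charge this works out to an effective multiplier of about 1.04025 on the requested amount.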
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: it's a read-only operation (no cards charged), has a validity period (10 minutes), involves currency conversion (ZAR), and includes fee details (3.5% + VAT). However, it doesn't cover potential errors, rate limits, or authentication requirements beyond the parameters, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has some redundancy (e.g., repeating 'Returns a quote for a VISA/Mastercard card charge') and is front-loaded with key info. However, it includes param annotations in a comment-like format that disrupts flow, and the truncated 'minu' suggests incomplete editing, reducing structural clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (financial transaction quoting), no annotations, and an output schema exists (so return values needn't be explained), the description is fairly complete. It covers purpose, usage context, behavioral traits, and parameter semantics. However, it lacks details on error handling or example outputs, which could enhance completeness for this sensitive operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides detailed semantics for all 5 parameters: explains 'api_key' as allocated to the application, 'token' as from /access/login, 'wallet_fk' and 'account_fk' with sources, and 'amount' with conversion and fee details. This adds significant meaning beyond the bare schema, though it could be more structured.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'returns a quote for a VISA/Mastercard card charge' and specifies it's 'STEP 1 in charging a tokenised Visa/Mastercard,' which provides a specific verb ('returns a quote') and resource ('VISA/Mastercard card charge'). However, it doesn't explicitly distinguish this from sibling tools like 'ai__fund_card_recharge' or 'ai__account_charge,' which might be related charging operations, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating 'This end point only performs a quote, no cards are charged' and 'The quote is valid for 10 minu[tes],' which helps guide when to use it (for quoting before actual charging). It implies usage as a preliminary step but doesn't explicitly name alternatives or specify when not to use it, such as compared to actual charging tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fund_card_recharge (A)
Charges a tokenised Visa/Mastercard. Confirm (yes/no) before executing
STEP 2 Charges a tokenised Visa/Mastercard, use /card/quote before this end point The card must have been previously charged using /fund/card_3D_secure_complete @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk @param account_fk: The account_fk provided by /wallet/accounts_list @param wallet_card_id: The wallet_card_id associated with this wallet, can be found with wallet/card_list @param amount: The amount in currency of the account_fk. All cards are charged in ZAR but a conversion of this amount to ZAR will be automatically performed. On top of this amount a fee with be charged equal to 3.5% for fees and insurance plus South African VAT of 15% (only on the fee portion). @param note: The note on the transaction
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | | |
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| account_fk | Yes | | |
| wallet_card_id | Yes | | |
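The two-step flow the descriptions outline (STEP 1: obtain a quote, valid for 10 minutes; confirm with the user; STEP 2: charge the card) can be sketched as follows. `send_request` is a hypothetical transport stub standing in for the real HTTP call, and the expiry check is an assumed client-side guard:

```python
import time

# Sketch of the quote-then-charge flow. send_request is a placeholder;
# the real endpoints would be called over HTTP with the same payloads.

QUOTE_TTL_SECONDS = 600  # quote validity stated as 10 minutes

def send_request(endpoint: str, payload: dict) -> dict:
    # Stub for the real Netfluid API call; echoes what would be sent.
    return {"endpoint": endpoint, "payload": payload, "issued_at": time.time()}

def charge_card(auth: dict, wallet_card_id: str, amount: float, confirm: bool) -> dict:
    # STEP 1: quote only -- no card is charged at this point.
    quote = send_request("/fund/card_quote", {**auth, "amount": amount})
    # The recharge description requires an explicit yes/no confirmation.
    if not confirm:
        raise RuntimeError("charge aborted: user confirmation (yes/no) required")
    if time.time() - quote["issued_at"] > QUOTE_TTL_SECONDS:
        raise RuntimeError("quote expired; request a new one")
    # STEP 2: the actual charge, using wallet_card_id from wallet/card_list.
    return send_request(
        "/fund/card_recharge",
        {**auth, "wallet_card_id": wallet_card_id, "amount": amount},
    )
```

Gating the charge on both confirmation and quote freshness reflects the prerequisites the tool descriptions state, without guessing at the server's actual error responses.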
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses important behavioral traits: the financial transaction nature (charging a card), currency conversion details (ZAR with automatic conversion), fee structure (3.5% plus 15% VAT on fees), and prerequisite steps. It doesn't mention rate limits, error conditions, or response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat disorganized with redundant phrasing ('STEP 2' appears to be a formatting artifact) and could be more streamlined. However, each sentence adds value: the purpose statement, confirmation requirement, prerequisites, and parameter explanations. The structure could be improved by grouping related information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial transaction with 7 parameters), no annotations, and the existence of an output schema, the description does a good job. It covers purpose, prerequisites, parameter semantics, and key behavioral details. The output schema existence means the description doesn't need to explain return values, which is appropriate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed semantic explanations for all 7 parameters. Each @param line adds meaningful context beyond what the bare schema provides, explaining sources, formats, and implications (especially for 'amount' with its fee details).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Charges a tokenised Visa/Mastercard.' This is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'ai__account_charge' or 'ai__fund_card_quote' beyond mentioning them as prerequisites.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'Confirm (yes/no) before executing,' 'use /card/quote before this end point,' and 'The card must have been previously charged using /fund/card_3D_secure_complete.' It clearly states prerequisites and execution confirmation requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fund_ott (A)
Funds an account with an OTT Voucher. Confirm (yes/no) before executing
Funds an account with an OTT Voucher. Only available in South Africa and can only be redeemed against any currency account, the system will perform the currency conversion @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk provided by /wallet/accounts_list @param pin: The OTT voucher PIN @param mobile: The South Africa mobile number associated with this customer
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| pin | Yes | | |
| token | Yes | | |
| mobile | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: geographic restriction (South Africa only), currency conversion handling, and a confirmation step. However, it lacks details on permissions, rate limits, error handling, or what the 'json object' return entails, which are important for a financial transaction tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but has some redundancy (the first sentence is repeated). The parameter explanations are clear but could be more tightly integrated. Overall, it's adequately concise but not optimally structured, with room to eliminate repetition and improve flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a financial funding tool with 5 parameters, 0% schema coverage, no annotations, but an output schema, the description does a solid job. It covers purpose, usage context, and parameter semantics. The output schema handles return values, so the description doesn't need to explain them. It could improve by adding more behavioral details like error cases or security notes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics for all 5 parameters, explaining their sources and purposes (e.g., 'api_key: allocated to your application', 'account_fk: provided by /wallet/accounts_list'). This goes beyond the bare schema, though it could provide more detail on formats or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Funds an account') and resource ('with an OTT Voucher'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'ai__fund' or 'ai__fund_payshap', which might also handle funding operations, leaving room for improvement in sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use the tool: 'Only available in South Africa and can only be redeemed against any currency account'. It also includes a prerequisite action: 'Confirm (yes/no) before executing'. However, it doesn't explicitly mention alternatives or when not to use it compared to other funding tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fund_payat (C)
Returns a payment reference for use with Pay@, Pay@ provides point of sale integration to all major retailers in South Africa and Botswana. The customer is issued with a unique Pay@ bill payment code, which he needs to present to the cashier at the retailer
Returns a payment reference for use with Pay@ . Customer uses this payment reference to fund via Pay@ online (South Africa) or at Pay@ supporting retailers in South Africa and Botswana. All payments @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk provided by /wallet/accounts_list @param amount: The amount requested, excluding pay@ merchant fees @param reference: The payment reference, can be any text
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| amount | No | | |
| api_key | Yes | | |
| reference | No | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's purpose and return value but lacks critical behavioral details: it doesn't mention whether this is a read-only or mutating operation, what authentication or rate limits apply, what happens if the payment fails, or any side effects. The statement 'Returns a payment reference' suggests it might be a read operation, but without annotations, this is insufficient for a mutation tool context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: it repeats 'Returns a payment reference for use with Pay@' verbatim, includes fragmented sentences ('All payments'), and mixes tool description with parameter documentation in a somewhat disorganized way. While it's not excessively verbose, the repetition and lack of clear structure reduce its effectiveness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage, no annotations, but an output schema exists, the description provides basic purpose and parameter hints but lacks completeness. It doesn't explain behavioral aspects like error conditions, security requirements, or integration details with Pay@. The output schema existence means return values are documented elsewhere, but the description should still cover usage context more thoroughly for this payment integration tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all 5 parameters with brief explanations, adding some meaning beyond the bare schema. However, the explanations are minimal (e.g., 'can be any text' for reference) and don't cover format constraints, units (e.g., currency for amount), or how parameters interact. This partially compensates but leaves significant gaps given the 0% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'returns a payment reference for use with Pay@' and explains that this reference is used for funding via Pay@ online or at retailers. It specifies the geographic scope (South Africa and Botswana) and mentions the payment method integration. However, it doesn't explicitly differentiate from sibling tools like 'fund_payshap' or other funding methods, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'Customer uses this payment reference to fund via Pay@' and mentioning geographic availability. However, it doesn't provide explicit guidance on when to choose this tool over alternatives like 'fund_payshap' or other funding methods in the sibling list, nor does it mention any prerequisites or exclusions beyond the required parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__fund_payshap (C)
Funds an account with a PayShap payment, PayShap is low-cost, instant bank-to-bank payments method only available from South African banks
Funds an account with a PayShap payment. Returns bank details for a PayShap payment. Only available in South Africa and can only be redeemed against a ZAR currency account. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk provided by /wallet/accounts_list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that PayShap is 'low-cost, instant bank-to-bank payments,' which adds useful context about cost and speed. However, it lacks details on permissions required (e.g., authentication needs beyond the parameters), potential side effects (e.g., whether this initiates a transaction or just generates details), rate limits, or error handling. For a financial tool with no annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues. It repeats 'Funds an account with a PayShap payment' unnecessarily, and the information is somewhat scattered across sentences without clear front-loading of key details. While it avoids excessive verbosity, the repetition and lack of optimal organization reduce its effectiveness, though it remains readable overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial transaction with geographic restrictions), no annotations, 0% schema coverage, but with an output schema present (which handles return values), the description is partially complete. It covers the basic purpose, some constraints, and parameter names, but lacks details on behavioral aspects, error cases, and deeper parameter semantics. The output schema mitigates the need to explain return values, but other gaps remain, making it adequate but with clear room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, meaning parameters are undocumented in the schema. The description lists the three parameters with brief explanations (e.g., 'The api key allocated to your application'), but these are minimal and don't fully compensate for the lack of schema documentation. For example, it doesn't specify format constraints, sources for 'account_fk,' or how 'token' relates to authentication. With three required parameters and no schema support, the description adds some value but falls short of providing comprehensive semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Funds an account with a PayShap payment' and 'Returns bank details for a PayShap payment.' It specifies the resource (account) and action (funding via PayShap). However, it doesn't explicitly distinguish this from sibling tools like 'fund_banks' or 'fund_card_recharge,' which might offer alternative funding methods, leaving some ambiguity about when to choose this specific option.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by stating 'Only available in South Africa and can only be redeemed against a ZAR currency account,' which implies geographic and currency restrictions. However, it doesn't explicitly guide when to use this tool versus alternatives (e.g., compared to other 'fund_' siblings like 'fund_banks' or 'fund_card_recharge'), nor does it mention prerequisites or exclusions beyond the location and currency constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__help_ping (grade C)
Pings the API
Pings the API and returns a response object @param api_key: The api key allocated to your application
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. The description only states that it 'pings the API and returns a response object' - this reveals nothing about whether this is a read-only operation, whether it has side effects, rate limits, authentication requirements beyond the api_key, or what the response object contains. For a tool with zero annotation coverage, this is insufficient behavioral information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but inefficiently structured. It repeats 'Pings the API' across two lines, uses inconsistent formatting with the @param and @return annotations, and includes unnecessary redundancy. While short, it's not well-organized or front-loaded with the most critical information. The @return statement adds little value since there's an output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this tool has no annotations, 0% schema description coverage, but does have an output schema, the description is incomplete. It doesn't explain what 'pinging' means operationally, why this tool exists among 100+ sibling tools, what the response indicates, or any error conditions. The presence of an output schema reduces the need to describe return values, but the description still fails to provide adequate context for understanding the tool's purpose and behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description includes '@param api_key: The api key allocated to your application', which adds meaning beyond the input schema (which has 0% description coverage and only shows api_key as a required string). However, with only 1 parameter total, the baseline expectation is higher. The param documentation is minimal but does clarify the purpose of the api_key parameter, earning a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Pings the API and returns a response object', which is a tautology that essentially restates the tool name 'ai__help_ping'. While it mentions the action (pinging) and resource (API), it doesn't provide any meaningful differentiation from what the name already implies. It doesn't explain what 'pinging' means in this context or what value this provides compared to other tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There are absolutely no usage guidelines provided. The description doesn't indicate when to use this tool versus alternatives, what purpose it serves in the broader context, or any prerequisites beyond the required api_key parameter. Given the many sibling tools (over 100), this lack of guidance is particularly problematic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__netfluid_voucher_check (grade A)
Performs a validation on a Netfluid voucher code. The voucher is not redeemed, only validated @param voucher_code: The Netfluid voucher code, format is 4 sets of integers, e.g. 1234-4321-1234-4321
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| voucher_code | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
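The description pins down the voucher format ("4 sets of integers, e.g. 1234-4321-1234-4321"), which is enough for a cheap client-side pre-check before spending a call on the validation tool. A sketch, assuming each set is exactly four digits as in the documented example — the tool itself remains the authoritative check:

```python
import re

# Matches the documented voucher format: four dash-separated groups
# of digits, e.g. "1234-4321-1234-4321". The four-digits-per-group
# assumption comes from the example, not a stated rule.
VOUCHER_RE = re.compile(r"^\d{4}-\d{4}-\d{4}-\d{4}$")

def looks_like_voucher(code: str) -> bool:
    """Return True if code matches the documented voucher format."""
    return VOUCHER_RE.match(code) is not None
```

A pre-check like this avoids sending obviously malformed codes, but a True result only means the shape is right, not that the voucher is valid.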
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that this is a validation-only operation (not redemption), which is useful behavioral context. However, it doesn't mention authentication requirements, rate limits, error conditions, or what happens with invalid codes. The description adds some value but leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with three sentences that each add value: purpose statement, behavioral clarification, and parameter details. It's front-loaded with the core purpose. The @param and @return annotations are slightly redundant with structured fields but provide quick reference. Minor room for improvement in flow prevents a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter), no annotations, and the presence of an output schema (implied by '@return: a json object'), the description is reasonably complete. It covers purpose, behavioral distinction from redemption, and parameter format. However, it could better address error handling or link to sibling tools for a more comprehensive context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It provides the parameter name 'voucher_code' and specifies the format: '4 sets of integers, e.g. 1234-4321-1234-4321.' This adds meaningful semantic information beyond the schema's basic type declaration. However, it doesn't mention validation rules beyond format, such as length or character restrictions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Performs a validation on a Netfluid voucher code. The voucher is not redeemed, only validated.' It specifies the verb (validation), resource (Netfluid voucher code), and distinguishes it from redemption. However, it doesn't explicitly differentiate from sibling tools like 'ai__account_merchant_voucher_redeem' or 'ai__wallet_voucher_list', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'The voucher is not redeemed, only validated,' suggesting this tool is for checking validity before redemption. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'ai__account_merchant_voucher_redeem' or other voucher-related tools, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__off_ramps (grade B)
Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read-only operation ('Returns a list'), which is helpful, but lacks details on permissions, rate limits, or error handling. The mention of 'reference information' and the return format adds some context, but it's minimal for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences, but it's somewhat redundant (e.g., repeating 'tools' and mentioning the return format in a separate sentence). It could be more front-loaded by combining ideas, but it avoids excessive verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no annotations, but has an output schema), the description is reasonably complete. It explains the purpose and return format, and with an output schema present, it doesn't need to detail return values. However, it could benefit from more behavioral context given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate here, but it could have clarified the empty input object if needed. Baseline is high due to no parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts,' which provides a clear verb ('Returns') and resource ('list of tools'). However, it doesn't distinguish itself from sibling tools like 'ai__on_ramps' or 'ai__bridges' that might handle similar concepts, making the purpose somewhat vague in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives. It mentions what the tool does but doesn't specify scenarios, prerequisites, or exclusions, leaving the agent without context for selection among the many sibling tools listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__on_ramps (grade B)
Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns a list and provides reference information, which implies it's a read-only operation. However, it doesn't disclose behavioral traits like rate limits, authentication needs, or potential side effects. The mention of '@return' hints at output format but is vague. With no annotations, this is a moderate disclosure but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences but includes redundant phrasing: 'This tools provides reference information in the "referenced_tools" schema' could be integrated more smoothly. The '@return' note is awkwardly placed and doesn't add clarity. While not verbose, the structure is slightly disjointed, reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is reasonably complete. It explains what the tool does (returns a list of tools) and hints at the output format. However, it could better clarify the relationship to sibling tools or the nature of the reference information, leaving minor gaps in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, meaning no parameters are documented in the schema. The description doesn't mention any parameters, which is appropriate since none exist. It adds no semantic details beyond the schema, but with zero parameters, the baseline is high as there's nothing to compensate for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts.' It specifies the verb ('Returns') and resource ('list of tools'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'ai__off_ramps' or 'ai__bridges', which might also list related resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions that the tool 'provides reference information,' but doesn't specify contexts, prerequisites, or exclusions. Given the many sibling tools related to bridges and accounts, this lack of differentiation leaves the agent without clear usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__privacy (grade C)
Provides Netfluid's privacy policy This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool 'provides' information but doesn't clarify if this is a read-only operation, whether it requires authentication, or what format the information comes in. The confusing second sentence about 'referenced_tools' schema adds noise rather than useful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with two sentences that don't logically connect. The first sentence is clear but the second adds confusion with unexplained 'referenced_tools' schema reference. It's not appropriately front-loaded and wastes space on irrelevant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is minimally adequate. However, for an informational tool among many similar siblings, it should better explain its specific role and when to use it. The confusing second sentence reduces rather than enhances completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description doesn't need to explain parameters, and it correctly doesn't attempt to describe any. The confusing mention of 'referenced_tools' schema is irrelevant but doesn't detract from the parameter clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Provides Netfluid's privacy policy' which is a clear purpose, but it's somewhat vague about what 'provides' means (displays, retrieves, explains?). It doesn't distinguish from sibling tools like 'ai__terms' or 'ai__about' that likely serve similar informational functions. The second sentence adds confusion rather than clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. With many sibling tools including 'ai__terms' and 'ai__about', there's no indication of when privacy policy information is needed versus terms of service or general about information. The description offers no context for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__push (grade C)
Sends a push text message to customer. Push messages are limited to 160 characters per message. Sends a push text message to customer. Push messages are limited to 160 characters per message. This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
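The only behavioral constraint the description discloses is the 160-character cap per push message. Because the description does not say whether oversized messages are truncated or rejected, a defensive client can split text into compliant segments itself. A sketch of that splitting:

```python
def split_push_message(text: str, limit: int = 160) -> list[str]:
    """Split text into segments no longer than the per-message cap.

    The 160-character limit comes from the tool description; how the
    API handles oversized messages is undocumented, so splitting
    client-side is a defensive assumption, not documented behavior.
    """
    return [text[i:i + limit] for i in range(0, len(text), limit)]
```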
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It mentions the 160-character limit, which is useful behavioral context, but doesn't disclose other traits like whether this is a read/write operation, authentication needs, rate limits, or what happens if the limit is exceeded. The description is insufficient for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive ('Sends a push text message to customer' appears twice) and includes irrelevant information about 'referenced_tools' schema and return format that doesn't add value. It's not front-loaded efficiently and contains wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters, 100% schema coverage, and an output schema exists, the description is minimally adequate. However, as a mutation tool with no annotations, it should provide more behavioral context (e.g., permissions, side effects). The character limit is helpful but insufficient for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter info, which is appropriate. Baseline is 4 for zero parameters, as the schema fully covers the absence of inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Sends a push text message to customer' which provides a clear verb ('sends') and resource ('push text message'), but it's repetitive and doesn't distinguish from sibling tools like 'ai__push_message_send' or 'ai__text_message'. The purpose is understandable but lacks differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Push messages are limited to 160 characters per message', which provides a constraint, but offers no guidance on when to use this tool versus alternatives like 'ai__push_message_send' or 'ai__text_message'. There's no explicit when/when-not or alternative tool references.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__push_message (grade C)
Sends a push text message to customer. Push messages are limited to 160 characters per message. Sends a push text message to customer. Push messages are limited to 160 characters per message. This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the character limit (160 characters), which is useful behavioral context. However, it doesn't mention other critical traits like whether this requires authentication, rate limits, error handling, or what happens if the message fails. For a messaging tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and repetitive: it repeats the same sentence twice ('Sends a push text message to customer. Push messages are limited to 160 characters per message') and includes irrelevant information about 'referenced_tools' schema and return format, which should be covered by the output schema. It's not front-loaded and wastes space with redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description doesn't need to explain inputs or return values. However, as a messaging tool with no annotations, it should provide more behavioral context (e.g., permissions, side effects). The character limit is helpful, but overall completeness is minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters (schema description coverage 100%), so there are no parameters to document. The description doesn't need to add parameter semantics, and it appropriately doesn't discuss any. Baseline for 0 parameters is 4, as it avoids unnecessary information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Sends a push text message to customer.' It specifies the verb ('sends') and resource ('push text message'), though it doesn't distinguish it from sibling tools like 'ai__push_message_send' or 'ai__text_message' which might have similar functions. The description is clear but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions a character limit (160 characters) but doesn't specify context, prerequisites, or exclusions. With many sibling tools related to messaging (e.g., 'ai__push_message_send', 'ai__text_message'), the lack of differentiation is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__push_message_devices (grade C)
Returns a list of devices registered to receive push messages on this wallet
Returns a list of devices registered to receive push messages on this wallet. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
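As with most wallet tools in this listing, all three parameters are required and sourced from earlier calls (the api_key from your application allocation, token and wallet_fk from /access/login). A small helper — hypothetical, not part of the API — that assembles the arguments and fails fast on blanks before a call is made:

```python
def build_arguments(api_key: str, token: str, wallet_fk: str) -> dict:
    """Assemble the three required arguments, failing fast on blanks.

    api_key is allocated to your application; token and wallet_fk are
    both returned by /access/login, per the tool description.
    """
    args = {"api_key": api_key, "token": token, "wallet_fk": wallet_fk}
    missing = [name for name, value in args.items() if not value]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    return args
```

Validating locally like this turns a vague server-side error into an immediate, named failure, which matters given that the schema documents no error behavior.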
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It describes a read-only operation ('returns a list'), but doesn't mention authentication requirements, rate limits, error conditions, or what happens if parameters are invalid. The description doesn't contradict annotations (since there are none), but provides minimal behavioral context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description has redundant repetition ('Returns a list...' appears twice) and includes parameter documentation that could be better structured. However, it's reasonably concise overall and front-loads the main purpose. The @param/@return formatting is clear but could be more integrated with the natural language description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which handles return value documentation) and the description covers all parameters despite 0% schema coverage, this is adequate for a simple list-retrieval tool. However, for a wallet/device management context with authentication parameters, more guidance about authentication flows and error handling would be helpful, especially with no annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides parameter documentation for all 3 parameters (api_key, token, wallet_fk) with source information ('allocated to your application', 'provided by /access/login'), which adds significant value beyond the 0% schema description coverage. While it doesn't specify exact formats or constraints, it gives practical guidance on where to obtain these values, compensating well for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
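To make the parameter sourcing concrete, here is a minimal sketch of assembling the request from credentials obtained via a prior /access/login call. The field names mirror the @param annotations; the payload shape itself is an assumption, not documented by the server.

```python
# Hypothetical sketch: assemble the ai__push_message_devices request.
# Field names mirror the @param annotations; the payload shape is assumed.
def build_devices_request(api_key: str, token: str, wallet_fk: str) -> dict:
    # All three values are required by the schema; fail early on blanks
    # rather than send a call the server will reject.
    for name, value in (("api_key", api_key), ("token", token),
                        ("wallet_fk", wallet_fk)):
        if not value:
            raise ValueError(f"missing required parameter: {name}")
    return {"api_key": api_key, "token": token, "wallet_fk": wallet_fk}

payload = build_devices_request("app-key", "wallet-api-token", "wallet-123")
```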
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'returns a list of devices registered to receive push messages on this wallet', which is a clear verb+resource combination. However, it doesn't distinguish this tool from sibling tools like 'ai__push_message' or 'ai__push_message_send', leaving ambiguity about their different purposes. The description is somewhat vague about what 'this wallet' refers to in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. The description doesn't mention prerequisites beyond the parameters, nor does it explain how this tool relates to other push-related tools in the sibling list. There's no indication of when this tool should be selected over other device or wallet information tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__push_message_send (C)
Sends a push message to customer. Push messages are limited to 160 characters per message
Sends a push message to customer. Push messages are limited to 160 characters per message @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param message: The message content in plain text @param title: The message title in plain text
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | | |
| token | Yes | | |
| api_key | Yes | | |
| message | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure. It mentions that 'Push messages are limited to 160 characters per message', which adds useful context about a constraint. However, it lacks details on permissions, rate limits, error handling, or what the 'json object' return entails, making it incomplete for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
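The 160-character limit is the one constraint the description does disclose, and it is easy to enforce client-side. A minimal sketch follows; whether the server truncates or rejects oversized messages is not documented, so truncating locally is an assumed policy.

```python
MAX_PUSH_CHARS = 160  # limit stated in the tool description

def prepare_push_message(message: str) -> str:
    # The server's behavior on oversized input is undocumented, so we
    # truncate client-side (assumed policy) and mark the cut with an ellipsis.
    if len(message) <= MAX_PUSH_CHARS:
        return message
    return message[: MAX_PUSH_CHARS - 1] + "…"
```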
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured, with a redundant repetition of the first sentence and param annotations formatted in a comment-like style that may not be standard. It's front-loaded but wastes space on repetition and could be more streamlined for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage and an output schema exists (implied by '@return: a json object'), the description adds some value by explaining parameters and noting a character limit. However, for a mutation tool with no annotations, it lacks details on authentication needs, side effects, or error cases, making it minimally adequate but with gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all 5 parameters with brief explanations (e.g., 'The message content in plain text'), adding meaning beyond the schema's type definitions. This helps clarify what each parameter represents, though it could provide more detail on formats or sources.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Sends a push message to customer' which provides a clear verb ('sends') and resource ('push message'), but it's vague about the specific context (e.g., what platform or system this is for). It repeats the same sentence, adding no clarity, and does not distinguish from sibling tools like 'ai__push' or 'ai__push_message', which might have overlapping functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives, such as sibling tools 'ai__push' or 'ai__push_message'. The description only states what the tool does without indicating prerequisites, constraints, or scenarios for its use, leaving the agent to infer usage from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__rea (C)
Provides tools to send text messages and emails This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. The description mentions 'Provides tools' and 'provides reference information' but doesn't clarify whether this is a read-only reference tool, an action tool, or something else. It doesn't disclose permissions needed, rate limits, or what the actual behavior is beyond vague statements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is only three sentences but contains redundancy ('Provides tools' repeated) and confusing structure. The second sentence about 'referenced_tools' schema is awkwardly phrased and doesn't flow logically from the first. The @return notation is unconventional and adds noise rather than clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity implied by the tool name 'ai__rea' and the many sibling tools for messaging, this description is inadequate. While there's an output schema, the description doesn't explain what this tool actually does - is it a catalog of messaging tools? A reference guide? The purpose remains unclear despite the existence of structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description doesn't need to explain parameters since there aren't any, and the schema coverage is complete. The mention of 'referenced_tools' schema provides some context about what might be returned.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Provides tools to send text messages and emails' which gives a general purpose, but it's vague about what 'tools' means and doesn't specify what this particular tool does versus its siblings like 'ai__send_email' or 'ai__text_message'. The second sentence about providing reference information adds confusion rather than clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. With siblings like 'ai__send_email' and 'ai__text_message' that appear to handle specific messaging functions, the description provides no context about when this tool is appropriate versus those more specific tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__realtime_event_alerts (C)
Sends a push text message to customer. Push messages are limited to 160 characters per messageSends a push text message to customer. Push messages are limited to 160 characters per message This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the 160-character limit (a behavioral constraint) and mentions that it 'sends a push text message' (implying a write operation). However, it lacks details on permissions, rate limits, delivery confirmation, or error handling that would be important for a messaging tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with redundant repetition of the same sentence, wasting space. The third sentence about 'referenced_tools' schema is cryptic and doesn't clearly relate to the tool's functionality. The @return statement is minimal but could be integrated better.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a messaging tool with no annotations, 0 parameters, and an output schema exists (so return values are documented elsewhere), the description is minimally adequate. It covers the basic action and a key constraint (character limit), but lacks context about authentication, delivery mechanisms, or integration with sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter information, but that's appropriate given the schema completeness. The baseline for 0 parameters is 4, as the description doesn't need to compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'sends a push text message to customer' which provides a clear verb+resource combination. However, it doesn't distinguish this tool from sibling tools like 'ai__push_message_send' or 'ai__text_message' that might have similar functionality, and the repetition of the same sentence suggests vagueness rather than emphasis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions a character limit (160 characters) which is a constraint, but doesn't specify use cases, prerequisites, or exclusions compared to other messaging tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__send_email (C)
Sends email to the wallet owner
Sends email to the wallet owner @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param message: The message body in html, encoded into base64 @param subject: The message subject in plain text, url encoded
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| message | Yes | | |
| subject | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It states the action 'sends email' but lacks critical behavioral details: whether this is a read-only or mutating operation (implied mutation but not confirmed), authentication requirements beyond parameters, rate limits, error handling, or what happens on success/failure. The description is minimal and doesn't provide enough context for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but wastes space by repeating 'Sends email to the wallet owner'. The param annotations are structured but could be more integrated. Overall, it's moderately concise but could be tighter by eliminating redundancy and better organizing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage and no annotations, the description partially compensates with param details. An output schema exists, so return values needn't be explained. However, for a mutation tool (implied by 'sends'), the description lacks behavioral context like side effects, permissions, or error cases, making it incomplete for safe use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides param annotations that explain each parameter's purpose and formatting (e.g., 'message body in html, encoded into base64', 'subject in plain text, url encoded'), adding significant meaning beyond the bare schema. However, it doesn't clarify parameter relationships or dependencies, such as how 'token' and 'wallet_fk' relate to authentication.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
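The encoding rules quoted above (HTML body in base64, URL-encoded subject) are concrete enough to sketch. The helper below is a hypothetical illustration; UTF-8 as the charset is an assumption the description does not confirm.

```python
import base64
from urllib.parse import quote

def build_email_params(html_body: str, subject: str) -> dict:
    # Per the @param notes: "message body in html, encoded into base64"
    # and "subject in plain text, url encoded". UTF-8 is assumed.
    return {
        "message": base64.b64encode(html_body.encode("utf-8")).decode("ascii"),
        "subject": quote(subject),
    }

params = build_email_params("<p>Hello</p>", "Account update")
```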
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Sends email to the wallet owner', which provides a basic verb+resource combination. However, it's vague about the email's purpose or context, and it doesn't distinguish this tool from potential siblings like 'ai__email' or 'ai__text_message' that might handle other communication types. The repetition of the same phrase adds no clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., requiring prior authentication via '/access/login'), nor does it specify scenarios where email is preferred over other communication methods like push notifications or SMS. Without such context, an agent must infer usage from parameter names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__SEPA (C)
Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool 'returns a list' and 'provides reference information', implying a read-only operation, but doesn't address potential side effects, permissions, rate limits, or error handling. This is a significant gap for a tool in a financial/blockchain context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences but contains redundancy (e.g., repeating 'tools' and 'schema') and awkward phrasing ('This tools provides'). It could be more streamlined, though it does front-load the main purpose effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is minimally adequate. However, for a tool in a complex domain with many siblings, it lacks context on how it fits into the broader workflow or what specific 'reference information' entails, leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the schema fully documents the lack of inputs. The description adds value by clarifying that the tool 'provides reference information in the "referenced_tools" schema', which helps the agent understand the output structure beyond what the schema alone indicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts' which provides a clear verb ('returns') and resource ('list of tools'). However, it doesn't distinguish itself from sibling tools like 'ai__bridges', 'ai__off_ramps', or 'ai__on_ramps' which appear to serve related functions, making the purpose somewhat vague in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions that it 'provides reference information' but doesn't specify scenarios, prerequisites, or exclusions compared to sibling tools like 'ai__bridges' or 'ai__virtual_accounts', leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__session (A)
Returns a session token given a customer provided session_key Can only be called once per session_key, thereafter the session_key is invalid @param session_key: The customer provided session_key @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| session_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the operation returns a token, is single-use per session_key (making the key invalid thereafter), and requires a customer-provided key. It doesn't cover error handling, rate limits, or authentication needs, but adds meaningful context beyond basic function. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
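The single-use constraint disclosed here implies a client should never retry with the same key. A hypothetical guard sketches this; the actual tool call is stubbed out, and the response shape is undocumented beyond "a json object".

```python
class SessionKeyStore:
    """Tracks session_keys already exchanged. Per the description, each
    key becomes invalid after one call, so a retry must use a fresh key."""

    def __init__(self) -> None:
        self._used: set = set()

    def exchange(self, session_key: str) -> dict:
        if session_key in self._used:
            raise RuntimeError("session_key already consumed; obtain a new one")
        self._used.add(session_key)
        # Placeholder for the real ai__session call; the actual response
        # fields are undocumented, so this shape is an assumption.
        return {"session_key": session_key, "token": "<server-issued token>"}
```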
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by behavioral constraint and parameter/return details. It uses three concise sentences with no wasted words, efficiently conveying key information. However, the @param and @return annotations could be integrated more smoothly, slightly affecting structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (returning a JSON object), the description doesn't need to detail return values. It covers the purpose, single-use behavior, and parameter semantics adequately for a token-generation tool with no annotations. Minor gaps include lack of error cases or security context, but it's largely complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds semantics for the single parameter 'session_key', explaining it's 'customer provided' and essential for token generation. This clarifies the parameter's role beyond the schema's type definition. Since there's only one parameter, the description adequately covers its meaning, though more detail on format could improve it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Returns') and resource ('a session token'), specifying it's based on a 'customer provided session_key'. However, its wording is identical to that of sibling 'ai__session_2_token', so nothing in the text distinguishes the two tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a session token is needed from a session_key, with a constraint that it 'Can only be called once per session_key'. However, it lacks explicit guidance on when to use this versus alternatives like 'ai__session_2_token' or other authentication tools, and no prerequisites or exclusions are mentioned. Usage is contextually implied but not fully articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__session_2_token (A)
Returns a session token given a customer provided session_key Can only be called once per session_key, thereafter the session_key is invalid @param session_key: The customer provided session_key @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| session_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the tool returns a session token (implying a read operation for authentication), and it has a critical constraint—the session_key becomes invalid after one use. This adds valuable context beyond basic functionality, though it doesn't cover aspects like error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: two sentences for core functionality and constraints, plus @param and @return annotations. Every sentence earns its place—no fluff or redundancy. It's front-loaded with the main purpose, making it easy to scan.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (authentication with a one-time key), no annotations, and an output schema present, the description does well. It explains the purpose, usage constraint, and parameter, and notes the return is a JSON object. The output schema likely covers return values, so the description doesn't need to detail them. It's mostly complete but could benefit from more parameter details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It includes a @param annotation explaining 'session_key: The customer provided session_key,' which adds meaning beyond the bare schema. However, it doesn't detail format, length, or source of the session_key, leaving some gaps. With one parameter partially documented, a baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a session token given a customer provided session_key.' It specifies the verb ('returns'), resource ('session token'), and input dependency ('given a session_key'). However, it doesn't explicitly differentiate from sibling tools like 'ai__session' or other authentication-related tools, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Can only be called once per session_key, thereafter the session_key is invalid.' This indicates when not to use it (after first call) and implies a one-time authentication flow. It doesn't explicitly mention alternatives or prerequisites, but the constraint is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__signup (D)
Provides tools to signup a new customer This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It vaguely mentions 'Provides tools' (plural) and 'reference information in the "referenced_tools" schema' but doesn't clarify what this means operationally. It doesn't disclose whether this is a read or write operation, what permissions are needed, what side effects occur, or how the signup process works. The @return note adds minimal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (3 sentences) but poorly structured. The first sentence is tautological, the second is confusing about 'referenced_tools', and the third about @return is minimal. While not verbose, the sentences don't efficiently convey useful information - they're under-specified rather than concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a signup tool (typically a write/mutation operation) with no annotations, 0% schema coverage, but with an output schema, the description is inadequate. It doesn't explain what the tool actually does, what the signup process entails, what the api_key is for, or what the output contains. The output schema existence means return values don't need explanation, but the core functionality remains obscure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides no information about the single parameter 'api_key': not its purpose, format, or source. The mention of 'referenced_tools' schema is confusing and doesn't clarify parameter usage. With 1 undocumented parameter, the description adds almost no semantic value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Provides tools to signup a new customer' which is a tautology that essentially restates the tool name 'ai__signup'. It doesn't specify what 'signup' entails (e.g., creates account, registers user, sets up credentials) or what resources are involved. While it mentions 'customer', it doesn't differentiate this from sibling tools like 'ai__automated_agent_signup' or 'ai__automated_signup'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, appropriate contexts, or exclusions. With many sibling tools related to account management (e.g., 'ai__account', 'ai__accounts', 'ai__access_platform_assign'), there's no indication of when this specific signup tool should be selected over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__skill (B)
Returns the latest version of the Netfluid SKILL.md file
Returns the latest version of the Netfluid SKILL.md file Update your files if the version number is greater and refresh all mcp tools
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes a read operation ('Returns') which implies it's non-destructive, but doesn't mention authentication requirements, rate limits, or what happens when invoked. The second part about updating files and refreshing tools adds some behavioral context beyond the basic return statement, but remains vague about implementation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but contains redundant repetition ('Returns the latest version...' appears twice). The second sentence adds value by suggesting actions based on the returned data, but the structure could be improved by combining these ideas more efficiently. It's appropriately sized for a simple tool but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a simple read operation with 0 parameters, no annotations, but with an output schema (which handles return value documentation), the description provides adequate context. It explains what the tool does and suggests how to use the returned information. For a tool of this complexity, the description is reasonably complete, though it could benefit from more specific behavioral details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the empty parameter set. The description appropriately doesn't discuss parameters since none exist. It earns a 4 because it doesn't waste space on non-existent parameters while focusing on the tool's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Returns the latest version of the Netfluid SKILL.md file', which is a clear verb+resource combination. However, it doesn't distinguish this tool from its many siblings (like ai__about, ai__help_ping, ai__support) that might also provide documentation or help information. The purpose is understandable but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Update your files if the version number is greater and refresh all mcp tools', which is an action suggestion rather than usage context. There's no indication of prerequisites, when this tool is appropriate, or what alternatives exist among the 100+ sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
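The ai__skill instruction "Update your files if the version number is greater" implies a numeric comparison of version strings, not a lexicographic one (as a string, "1.10.0" would sort below "1.9.0"). Below is a minimal sketch of that check, assuming dotted-integer version numbers; the function names are illustrative, not part of the Netfluid API.

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '1.4.2' into a comparable int tuple."""
    return tuple(int(part) for part in v.split("."))

def should_update(local: str, remote: str) -> bool:
    """Return True when the fetched SKILL.md version is newer than the local copy.

    Compares component-wise as integers so '1.10.0' correctly outranks '1.9.0',
    which a plain string comparison would get wrong.
    """
    return version_tuple(remote) > version_tuple(local)
```

If `should_update` returns True, the agent would overwrite its local SKILL.md and refresh its MCP tool list, per the tool's instruction.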
ai__support (C)
How to contact Netfluid's customer support This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds some behavioral context beyond the empty annotations. It mentions that the tool 'provides reference information' and specifies the return format ('a json object'), which helps the agent understand this is an informational lookup tool rather than an action tool. However, it doesn't clarify what kind of support information is provided or how comprehensive it is.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but somewhat awkwardly structured. The first sentence is a fragment ('How to contact...') rather than a complete description. The second sentence mixes tool behavior with implementation details ('in the "referenced_tools" schema'). While concise, the structure could be clearer and more front-loaded with the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this tool has 0 parameters, an output schema exists, and annotations are empty, the description provides a basic understanding but leaves gaps. It doesn't clearly explain what specific support information is returned or how this differs from other help/contact tools. The mention of a JSON return format is helpful given the output schema, but more context about the content would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description doesn't need to explain parameters, and it correctly indicates this is a parameterless tool that returns information. The mention of 'referenced_tools' schema in the description adds some context about the return structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'How to contact Netfluid's customer support' which provides some purpose but is vague about what the tool actually does. It doesn't specify a clear verb+resource combination like 'retrieve support contact information' or 'display support options.' The second sentence mentions 'reference information in the "referenced_tools" schema' but this is confusing rather than clarifying.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. With many sibling tools available (including ai__help_ping, ai__contact, ai__email, and ai__text_message), the description doesn't help the agent understand when this specific support tool is appropriate versus other communication or help-related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__telco_accounts (C)
Lists all the available telecommunications accounts
Lists all the available telecommunications accounts @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states this is a list operation, implying it's read-only and non-destructive, but doesn't explicitly confirm this. It mentions authentication parameters but doesn't explain rate limits, pagination, or what happens if authentication fails. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has redundant repetition ('Lists all the available telecommunications accounts' appears twice). The parameter documentation is clear but could be more integrated. Overall, it's adequately structured but not optimally efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There's an output schema present, so the description doesn't need to explain return values. The description covers the basic purpose and parameters well, but for a tool with no annotations and many sibling tools, it lacks guidance on usage context and behavioral details. It's minimally complete but could be more helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description compensates by documenting all three parameters with @param annotations that explain their purpose and source (e.g., 'The api key allocated to your application', 'The wallet_api_token provided by /access/login'). This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Lists all the available telecommunications accounts' which clearly indicates a read/list operation. However, it doesn't distinguish this tool from sibling tools like 'ai__accounts' or 'ai__wallet_accounts_list' which might serve similar purposes. The verb 'Lists' is specific, but the scope differentiation is missing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools that might handle accounts or telecommunications (e.g., 'ai__accounts', 'ai__account', 'ai__telco_bundles'), there's no indication of when this specific tool is appropriate versus those others. No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
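The three @param lines in the ai__telco_accounts description imply a two-step flow: call /access/login first, then pass its `token` and `wallet_fk` alongside the application's `api_key`. Below is a minimal sketch of assembling those arguments; it assumes the login response exposes keys of the same name, which is a guess — the actual response shape is not documented here.

```python
def telco_accounts_params(api_key: str, login_response: dict) -> dict:
    """Assemble the three required arguments for an ai__telco_accounts call.

    Assumption: the tool docs say token and wallet_fk are 'provided by
    /access/login', so we read them from keys of the same name in the login
    response and fail loudly if either is missing.
    """
    missing = [k for k in ("token", "wallet_fk") if k not in login_response]
    if missing:
        raise KeyError(f"/access/login response missing: {missing}")
    return {
        "api_key": api_key,
        "token": login_response["token"],
        "wallet_fk": login_response["wallet_fk"],
    }
```

Validating the login response before the call surfaces a broken auth flow as a clear local error rather than an opaque server-side failure.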
ai__telco_bundles (B)
Lists all the available telecommunications bundles
Lists all the available telecommunications bundles @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It implies a read-only list operation ('Lists all'), which suggests non-destructive behavior, but doesn't disclose authentication needs beyond the api_key parameter, rate limits, pagination, or what 'all' entails (e.g., if filtered by region). It adds minimal behavioral context beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but includes redundant repetition ('Lists all the available telecommunications bundles' appears twice). The param and return annotations are useful but could be integrated more smoothly. Overall, it's brief but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (implied by 'Has output schema: true'), the description doesn't need to detail return values. It covers the basic purpose and parameter semantics adequately for a simple list tool, though it lacks behavioral details like pagination or error handling. With annotations empty, it's reasonably complete but could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by documenting the single parameter: '@param api_key: The api key allocated to your application'. This clarifies the purpose of the api_key beyond just being a required string, though it doesn't specify format or source. With 0% coverage, this is good but not exhaustive (e.g., no details on key format).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Lists all the available telecommunications bundles' which clearly indicates a read/list operation. However, it's somewhat vague about what 'telecommunications bundles' specifically are (e.g., service plans, data packages) and doesn't differentiate from sibling tools like 'ai__telco_accounts' or 'ai__account_buy_telco' that might handle related telco operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With many sibling tools (e.g., 'ai__telco_accounts', 'ai__account_buy_telco'), the description doesn't clarify if this is for browsing available plans versus managing existing accounts or purchasing bundles, leaving the agent to guess based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__terms (B)
Provides a url from which to fetch Netfluid's terms and conditions of service This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool provides a URL (implying a read-only, non-destructive operation) and returns a JSON object, which adds basic behavioral context. However, it lacks details on authentication needs, rate limits, or error handling, which are important for a tool in this domain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, but the second sentence is redundant ('This tools provides reference information...') and adds little value beyond the first. The structure is front-loaded with the main purpose, but the repetition and minor grammatical error ('tools' instead of 'tool') reduce efficiency, making it somewhat verbose for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no annotations) and the presence of an output schema (which handles return values), the description is mostly complete. It explains what the tool does and the return format. However, it could benefit from more context on when to use it relative to siblings, but the existing information suffices for basic operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the schema fully documents that no inputs are required. The description doesn't add parameter information, but since there are no parameters, this is acceptable. The baseline for 0 parameters is 4, as the description doesn't need to compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Provides a url from which to fetch Netfluid's terms and conditions of service', which clearly identifies the resource (terms and conditions) and the action (providing a URL). However, it doesn't differentiate this from sibling tools like 'ai__privacy' or 'ai__about' that might provide similar reference information, making the purpose somewhat vague in context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'This tools provides reference information in the "referenced_tools" schema', which implies usage in a specific context but doesn't explicitly state when to use this tool versus alternatives like 'ai__privacy' or 'ai__about'. No guidance on prerequisites, timing, or exclusions is provided, leaving the agent with minimal direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__text (B)
Sends a push text message to customer. Push messages are limited to 160 characters per message
Sends a push text message to customer. Push messages are limited to 160 characters per message This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses a key behavioral trait: 'Push messages are limited to 160 characters per message,' which is useful for understanding constraints. However, it doesn't mention other important aspects like authentication needs, rate limits, error handling, or what 'push' entails operationally. The description adds some value but is incomplete for a messaging tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and repetitive: it repeats 'Sends a push text message to customer. Push messages are limited to 160 characters per message' twice, wasting space. The sentences about 'referenced_tools' and '@return' are vague and don't add clarity. It lacks front-loading of key information and includes redundant text, reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (indicated by 'Has output schema: true'), the description doesn't need to explain return values. However, for a tool that sends messages, it should cover more behavioral context (e.g., delivery guarantees, target audience). The character limit is helpful, but without annotations and with minimal guidance, it's only partially complete for its function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, and schema description coverage is 100%, meaning there are no parameters to document. The description doesn't need to add parameter semantics, so it meets expectations by not introducing confusion. However, it mentions 'referenced_tools' schema without explaining its relevance, which is slightly extraneous but not detrimental.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Sends a push text message to customer.' This specifies the verb ('sends') and resource ('push text message'), making it understandable. However, it doesn't distinguish this tool from sibling tools like 'ai__text_message' or 'ai__push_message_send', which appear to have similar functions, so it doesn't achieve full differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions a character limit (160 characters), but doesn't specify scenarios, prerequisites, or exclusions. With many sibling tools that seem related (e.g., 'ai__text_message', 'ai__push_message_send'), the lack of comparative context leaves usage ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__text_message (C)
Sends a push text message to customer. Push messages are limited to 160 characters per message
Sends a push text message to customer. Push messages are limited to 160 characters per message This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters | ||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the 160-character limit and that it 'sends a push text message,' implying a write operation, but lacks details on permissions, rate limits, error conditions, or what 'push' entails operationally. This is insufficient for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive ('Sends a push text message to customer' appears twice) and includes extraneous information about 'referenced_tools' schema and '@return' that doesn't clarify the tool's purpose or usage. It's not front-loaded effectively and wastes space on redundant or irrelevant details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description's job is lighter. It covers the basic action and a constraint (character limit), but as a mutation tool with no annotations, it should provide more behavioral context (e.g., side effects, auth needs). The output schema mitigates some gaps, but overall completeness is minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate here. A baseline of 4 is given since the schema fully covers the parameters (none), and the description doesn't need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Sends a push text message') and target ('to customer'), with specific resource details ('push messages are limited to 160 characters per message'). However, it doesn't explicitly differentiate from sibling tools like 'ai__text' or 'ai__account_send_sms', which might offer similar functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions a character limit but doesn't specify prerequisites, exclusions, or compare to related tools like 'ai__text' or 'ai__account_send_sms', leaving the agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
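Both SMS tools above state a 160-character per-message limit but not what happens to longer text. A conservative client-side approach is to split before sending; the sketch below does that, with the caveat that whether the service itself splits, truncates, or rejects oversized messages is an open assumption not settled by the docs.

```python
# Per-message limit stated in the ai__text / ai__text_message descriptions.
MAX_PUSH_LEN = 160

def split_push_message(text: str) -> list:
    """Split a long message into chunks that each fit the 160-character limit.

    Conservative client-side handling: the docs state only the limit, not the
    service's behavior for oversized input, so we never send a chunk over it.
    """
    chunks = [text[i:i + MAX_PUSH_LEN] for i in range(0, len(text), MAX_PUSH_LEN)]
    # An empty message still yields one (empty) chunk rather than zero sends.
    return chunks or [""]
```

Each returned chunk would then be sent as a separate push message, preserving order.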
ai__virtual_account (C)
Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts This tools provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool returns a list and provides reference information, which implies a read-only operation, but doesn't disclose any behavioral traits like whether it requires authentication, has rate limits, returns paginated results, or has any side effects. The mention of '@return' format is helpful but insufficient for behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise at two sentences, but the structure could be improved. The first sentence is somewhat awkwardly phrased and could be more front-loaded. The '@return' notation feels like an implementation detail rather than a user-focused description. While not verbose, it's not optimally structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a reference tool with 0 parameters and an output schema exists, the description provides basic information about what it returns. However, for a tool in a complex financial/blockchain context with many sibling tools, it doesn't sufficiently explain its role or value. The output schema existence reduces the burden, but more context about when and why to use this tool would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% schema description coverage, so the description doesn't need to compensate for missing parameter documentation. The description correctly indicates this is a parameterless tool that returns reference information. The baseline for 0 parameters with high schema coverage would be 4, as the description appropriately doesn't waste space on non-existent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts' and provides reference information. This gives a general purpose but is somewhat vague about what specific resource is being returned. It doesn't clearly distinguish this from sibling tools like 'ai__bridges', 'ai__off_ramps', or 'ai__virtual_accounts' which might serve similar purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools related to bridges, accounts, and virtual accounts, there's no indication of when this reference tool is appropriate versus the specific operational tools. The description mentions what it returns but not the context for using it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__virtual_accountsBInspect
Returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, and virtual accounts. This tool provides reference information in the "referenced_tools" schema. @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
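These reference tools take no arguments, so a client simply sends an empty `arguments` object. A minimal sketch of the standard MCP JSON-RPC 2.0 `tools/call` request an agent would issue (illustrative only; transport details such as the endpoint URL are not shown):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Assemble an MCP JSON-RPC 2.0 tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Parameterless reference tool: send an empty arguments object.
request = build_tool_call("ai__virtual_accounts", {})
print(json.dumps(request))
```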
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It states the tool returns a list and provides reference information, which suggests a read-only operation. However, it doesn't disclose important behavioral traits such as whether this requires authentication, has rate limits, returns paginated results, or what the 'json object containing the schema' actually contains. The description adds some context about the reference nature but leaves key behavioral questions unanswered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise at three sentences, but the structure could be improved. The first sentence is somewhat long and lists multiple concepts without clear organization. The second sentence about providing reference information is useful but could be integrated better. The third sentence about the return format is helpful but feels tacked on rather than flowing naturally.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this tool has 0 parameters, no annotations, but does have an output schema, the description provides adequate context. It explains what the tool returns (a list of tools related to specific domains) and mentions the reference nature and return format. The output schema existence means the description doesn't need to detail return values, making this reasonably complete for a simple reference tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% schema description coverage, so the baseline would be 4 even with no parameter information in the description. The description doesn't discuss parameters, which is appropriate since there are none. It correctly focuses on what the tool returns rather than what it accepts.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'returns a list of tools that work with bridges, off-ramps, on-ramps, cross-blockchain transfers, virtual accounts' which provides a general purpose, but it's somewhat vague about what kind of list this is (reference information vs operational tools). It doesn't clearly distinguish this from sibling tools like 'ai__bridges', 'ai__off_ramps', or 'ai__on_ramps' which might return actual bridge/ramp data rather than tool references.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools related to bridges, ramps, and accounts, there's no indication whether this should be used for discovery, reference lookup, or as a starting point for those operations. The description mentions it 'provides reference information' but doesn't clarify when that reference information is needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_accounts_listBInspect
Provides a detailed list of accounts in a wallet.
Provides a detailed list of accounts in a wallet. Each entry returns an account_fk for later use. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
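The three required parameters follow a pattern shared by most wallet tools in this listing: `api_key` is allocated to your application, while `token` and `wallet_fk` come from `/access/login`. A minimal sketch of the call with placeholder values, assuming the standard MCP `tools/call` envelope:

```python
import json

# Placeholder credentials: api_key comes from your application
# registration; token and wallet_fk from the /access/login response.
ARGS = {
    "api_key": "YOUR_API_KEY",
    "token": "WALLET_API_TOKEN",
    "wallet_fk": "WALLET_FK",
}

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "ai__wallet_accounts_list", "arguments": ARGS},
}
print(json.dumps(request, indent=2))
```

Each entry in the result carries an `account_fk` that later account-scoped tools expect.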
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states the tool provides a 'detailed list' and returns 'account_fk for later use', which adds some behavioral context. However, it doesn't disclose critical traits like whether it's read-only (implied by 'list' but not explicit), pagination, rate limits, or authentication requirements beyond the parameters. This leaves gaps in transparency for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy: the first sentence is repeated verbatim. It front-loads the purpose, but the parameter explanations are brief and could be more structured. Overall, it's adequate in length but could be trimmed for better efficiency without losing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the context: 3 parameters with 0% schema coverage, no annotations, and an output schema exists (so return values are documented elsewhere), the description does a reasonable job. It explains the purpose and parameters, and hints at the output ('a json object' with 'account_fk'). However, it lacks details on behavioral aspects like error handling or usage constraints, which would improve completeness for a tool with no annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description compensates by listing each parameter with brief semantics: 'api_key: The api key allocated to your application', 'token: The wallet_api_token provided by /access/login', and 'wallet_fk: The wallet_fk provided by /access/login'. This adds meaningful context beyond the schema's type definitions, though it could be more detailed (e.g., format or examples).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Provides a detailed list of accounts in a wallet.' It specifies the verb ('list') and resource ('accounts in a wallet'), making the action clear. However, it doesn't explicitly differentiate from its sibling 'ai__wallet_accounts_list_verbose', which appears to be a similar list tool, so it misses the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions that 'Each entry returns an account_fk for later use,' which hints at a use case, but doesn't specify when to choose this over other list tools like 'ai__wallet_accounts_list_verbose' or 'ai__accounts'. No explicit when/when-not statements or prerequisites are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_accounts_list_verboseBInspect
Provides a detailed list of accounts in a wallet with crypto balances included.
Provides a detailed list of accounts in a wallet with crypto balances included. This may take some time to complete. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
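Because the verbose listing is documented as potentially slow, a client can decide up front which variant to call. A small illustrative helper; the selection heuristic is an assumption drawn from the two descriptions, not server-mandated behavior:

```python
def pick_accounts_tool(need_balances: bool) -> str:
    # The verbose variant is flagged as slow ("may take some time"),
    # so only pay that cost when crypto balances are actually needed.
    if need_balances:
        return "ai__wallet_accounts_list_verbose"
    return "ai__wallet_accounts_list"

print(pick_accounts_tool(True))   # → ai__wallet_accounts_list_verbose
print(pick_accounts_tool(False))  # → ai__wallet_accounts_list
```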
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that the operation 'may take some time to complete,' which is useful behavioral context about potential latency. However, it doesn't mention other traits like whether it's read-only, requires authentication, has rate limits, or what happens on errors. The description adds some value but lacks comprehensive behavioral disclosure for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy: the first two sentences are identical, which is wasteful. It front-loads the purpose but includes repetitive text. The parameter annotations are structured but could be more integrated. Overall, it's somewhat efficient but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (list operation with 3 parameters), no annotations, and an output schema exists (implied by '@return: a json object'), the description is reasonably complete. It covers the purpose, parameters, and a behavioral note, and the output schema handles return values. However, it could improve by addressing sibling differentiation and more detailed parameter semantics to fully compensate for the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description lists three parameters (api_key, token, wallet_fk) with brief explanations (e.g., 'The api key allocated to your application'), adding basic semantics. However, it doesn't fully compensate for the coverage gap—it lacks details on formats, constraints, or examples. With 0% coverage, a baseline of 3 is appropriate as it provides some but incomplete parameter information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Provides a detailed list of accounts in a wallet with crypto balances included.' This specifies the verb ('list'), resource ('accounts in a wallet'), and key detail ('crypto balances included'). However, it doesn't explicitly differentiate from its sibling 'ai__wallet_accounts_list' (without 'verbose'), which appears to be a similar list tool, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'This may take some time to complete,' which hints at performance considerations but doesn't specify scenarios or compare with other tools like 'ai__wallet_accounts_list' or 'ai__accounts'. No explicit when/when-not instructions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_assets_listBInspect
Provides a detailed list of assets available to this wallet. For display only. May be filtered by blockchain
Provides a detailed list of assets available to this wallet as async defined per user grouping and optionally per blockchain. This is for display purposes only as every wallet may still make use of any asse @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param blockchain_fk: The blockchain_fk to use as filter
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes | ||
| blockchain_fk | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that the tool is for 'display only' (implying read-only, non-destructive behavior) and mentions async grouping, but lacks details on permissions, rate limits, error handling, or response structure. This provides basic safety context but misses deeper behavioral traits needed for robust agent use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive and poorly structured, with duplicated phrases ('Provides a detailed list...') and incomplete sentences ('as every wallet may still make use of any asse'). It's not front-loaded efficiently, wasting space on redundancy instead of delivering clear, concise information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no annotations, but has an output schema), the description covers the core purpose, parameters, and basic behavior. The output schema existence means return values don't need explanation, but the description could better address usage context and behavioral details to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists all four parameters with brief explanations (e.g., 'api_key: The api key allocated to your application'), adding semantic meaning beyond the schema's type definitions. However, it doesn't clarify format details (e.g., integer ranges for wallet_fk) or usage nuances, leaving some gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Provides a detailed list of assets available to this wallet' with optional blockchain filtering. It specifies the verb ('list') and resource ('assets'), but does not explicitly differentiate from sibling tools like 'ai__wallet_accounts_list' or 'ai__crypto_balance', which might list related but different resources, leaving some ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'For display only' and mentioning filtering by blockchain, but it does not explicitly guide when to use this tool versus alternatives. For example, it doesn't compare to 'ai__wallet_accounts_list' or 'ai__crypto_balance', leaving the agent to infer based on the resource type (assets vs. accounts vs. balances).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_card_listBInspect
Returns a list of tokenised Visa/Mastercard(s) associated with this wallet
Returns a list of tokenised Visa/Mastercard(s) associated with this wallet. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must fully disclose behavioral traits. It states the tool returns a list, implying a read-only operation, but does not specify if it's safe, if it requires authentication, or any rate limits. The mention of parameters like 'api_key' and 'token' hints at authentication needs, but this is not explicitly stated. Without annotations, the description provides minimal behavioral context, falling short of what's needed for a mutation or sensitive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but it includes redundant repetition ('Returns a list...' appears twice) and parameter details that could be more integrated. It is appropriately sized but not optimally structured, with some waste in duplication and parameter formatting that could be streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (list retrieval with authentication parameters), annotations are empty, but the description covers the purpose and parameters well. Since an output schema exists ('Has output schema: true'), the description does not need to explain return values. It provides sufficient context for the agent to understand what the tool does and what inputs are required, though it lacks usage guidelines and deeper behavioral details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining each parameter: 'api_key: The api key allocated to your application', 'token: The wallet_api_token provided by /access/login', and 'wallet_fk: The wallet_fk provided by /access/login'. This clarifies the source and purpose of all three parameters, effectively compensating for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a list of tokenised Visa/Mastercard(s) associated with this wallet.' It specifies the verb ('Returns'), resource ('list of tokenised Visa/Mastercard(s)'), and scope ('associated with this wallet'). However, it does not explicitly differentiate from sibling tools like 'ai__wallet_accounts_list' or 'ai__wallet_assets_list', which might list other wallet-associated items, so it lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites beyond the parameters, nor does it compare to sibling tools such as 'ai__wallet_accounts_list' or 'ai__wallet_card_remove'. This leaves the agent without clear context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_card_removeCInspect
Removes a tokenised Visa/Mastercard(s) associated with this wallet, confirm (yes/no) before executing
Removes a tokenised Visa/Mastercard(s) associated with this wallet. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param wallet_card_id: The wallet_card_id provided by /wallet/card_list
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes | ||
| wallet_card_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
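Since the description asks for a yes/no confirmation before executing this destructive call, a client can gate the request on an explicit prior confirmation. A minimal sketch with placeholder credentials (how the confirmation is collected is up to the caller):

```python
def build_card_remove(wallet_card_id: str, confirmed: bool) -> dict:
    """Refuse to assemble the destructive ai__wallet_card_remove call
    unless the user has already confirmed, per the tool description."""
    if not confirmed:
        raise PermissionError(
            "ai__wallet_card_remove requires explicit user confirmation")
    return {
        "name": "ai__wallet_card_remove",
        "arguments": {
            "api_key": "YOUR_API_KEY",         # placeholder
            "token": "WALLET_API_TOKEN",       # from /access/login
            "wallet_fk": "WALLET_FK",          # from /access/login
            "wallet_card_id": wallet_card_id,  # from /wallet/card_list
        },
    }
```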
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a confirmation step ('confirm (yes/no) before executing'), which adds some context about safety or user interaction. However, it lacks details on permissions, side effects (e.g., irreversible deletion), rate limits, or error handling, making it insufficient for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy (the first two sentences repeat the same core action). The parameter list is structured with @param tags, which is clear, but the overall text could be more streamlined by removing duplication and integrating the confirmation note more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation with 4 parameters, no annotations, but an output schema exists), the description is partially complete. It covers the basic action and parameters but lacks behavioral details like permissions or side effects. The presence of an output schema means return values are documented elsewhere, so the description doesn't need to explain them, but it still falls short in providing full context for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description lists each parameter with brief source hints (e.g., 'provided by /access/login'), adding some semantics beyond the bare schema. However, it does not explain the meaning, format, or constraints of parameters like 'api_key' or 'wallet_card_id,' failing to fully compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Removes') and the target ('tokenised Visa/Mastercard(s) associated with this wallet'), making the purpose specific and understandable. However, it does not explicitly differentiate from potential sibling tools like 'beneficiary_remove' or 'wallet_google_auth_verify', which might handle different removal operations, leaving room for slight ambiguity in sibling context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage guideline with 'confirm (yes/no) before executing,' which implies a cautionary context for use. However, it does not specify when to use this tool versus alternatives (e.g., other removal tools in the sibling list) or any prerequisites beyond the parameters, leaving the guidance somewhat implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_domainCInspect
Returns the wallet white label domain.
Returns the wallet white label domain. Performs a search on any one of the parameters. @param api_key: The api key allocated to your application @param wallet_fk: The wallet_fk @param account_fk: The account_fk @param account_address: The account address @param payat_reference: The payat_reference excluding the merchant code example:111249 would submit 9
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | ||
| wallet_fk | No | ||
| account_fk | No | ||
| account_address | No | ||
| payat_reference | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
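Because the lookup "performs a search on any one of the parameters", a client-side guard can keep the query unambiguous by enforcing exactly one search key. A hedged sketch; the exactly-one rule is an assumption, since the server's behavior when several are supplied is undocumented:

```python
def build_domain_args(api_key: str, **search) -> dict:
    """Build arguments for ai__wallet_domain: api_key plus exactly one
    of the four optional search identifiers."""
    allowed = {"wallet_fk", "account_fk", "account_address", "payat_reference"}
    unknown = set(search) - allowed
    if unknown:
        raise ValueError(f"unsupported search parameters: {sorted(unknown)}")
    if len(search) != 1:
        raise ValueError("provide exactly one search parameter")
    return {"api_key": api_key, **search}

args = build_domain_args("KEY", account_address="0xabc")
```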
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a read operation ('Returns'), but doesn't mention authentication requirements beyond the api_key parameter, rate limits, error conditions, or what happens when multiple search parameters are provided. The description is minimal and doesn't provide adequate behavioral context for a tool with 5 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise at 4 sentences, but has structural issues. It repeats the first sentence verbatim, wasting space. The parameter documentation uses inconsistent formatting and the @return statement is vague ('a json object'). While not excessively long, the repetition and poor parameter documentation reduce its effectiveness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, no annotations, and complex sibling relationships, the description is inadequate. It doesn't explain what a 'wallet white label domain' is, how the search logic works when multiple parameters are provided, what the return structure contains, or how this differs from other wallet tools. The existence of an output schema helps, but the description should provide more context about the tool's role and behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 5 parameters (1 required, 4 optional), the description attempts to document all parameters but does so poorly. The parameter explanations are minimal and unhelpful - most just restate the parameter name with 'The' added (e.g., 'The wallet_fk', 'The account_fk'). Only 'payat_reference' gets slightly more explanation. The description doesn't clarify relationships between parameters or how the 'search on any one' logic works.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Returns the wallet white label domain' which is a clear verb+resource combination, but it's vague about what 'white label domain' means in this context. It doesn't distinguish this tool from sibling tools like 'ai__wallet_accounts_list' or 'ai__wallet_verify' which also seem to retrieve wallet-related information. The purpose is understandable but lacks specificity about the domain concept.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Performs a search on any one of the parameters' which suggests it can search by different identifiers, but doesn't explain when to choose this tool over other wallet-related tools in the sibling list. There's no mention of prerequisites, alternatives, or specific use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_fee (grade: C)
Calculates the transaction fee given an amount.
Calculates the transaction fee given an amount. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param amount: The transaction amount
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| amount | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
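The parameter table above maps directly onto a standard MCP `tools/call` request. A minimal Python sketch of the payload an agent would send (the envelope follows the MCP JSON-RPC convention; the wire format of `amount` is undocumented, so it is passed through as a string here):

```python
def build_fee_request(api_key: str, token: str, wallet_fk: str, amount: str) -> dict:
    """Build an MCP tools/call payload for ai__wallet_fee.

    All four arguments are required by the schema: api_key is the key
    allocated to the application, while token and wallet_fk both come
    from /access/login.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "ai__wallet_fee",
            "arguments": {
                "api_key": api_key,
                "token": token,
                "wallet_fk": wallet_fk,
                "amount": amount,
            },
        },
    }
```

The same envelope shape applies to every tool on this server; only `name` and `arguments` change.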
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions authentication parameters (api_key, token, wallet_fk) but doesn't explain why they're needed or what permissions are required. It states 'Calculates' which implies a read-only operation, but doesn't disclose if this has side effects, rate limits, or what happens with invalid inputs. The description adds minimal behavioral context beyond the basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise with the core function stated upfront, but has unnecessary repetition ('Calculates the transaction fee given an amount.' appears twice). The @param/@return annotations are structured but could be more integrated. Overall efficient but with some redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage, the description compensates with param explanations. An output schema exists, so return values don't need description. However, for a fee calculation tool that likely involves business logic and authentication, more context about the calculation method, currency units, or error conditions would be helpful despite the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides param documentation through @param annotations, explaining what each parameter represents (e.g., 'api_key: The api key allocated to your application'). This adds meaningful semantics beyond the bare schema. However, it doesn't explain parameter formats, constraints, or relationships between parameters like token and wallet_fk from /access/login.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Calculates the transaction fee given an amount' which is a clear verb+resource combination. However, it doesn't distinguish this tool from potential siblings like 'ai__fees' or other fee-related tools in the list, making the purpose somewhat generic rather than specific to this particular calculation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'ai__fees' and various transaction tools (e.g., 'ai__account_pay', 'ai__account_send'), there's no indication of whether this is for pre-transaction estimation, post-transaction calculation, or how it differs from other fee-related operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_google_auth_list (grade: B)
Returns the wallet's Google Authenticator registration.
Returns the wallet's Google Authenticator registration. An object is returned with a Secret, URL, QR-Code. Any of which can be presented to Google Authenticator app in order to setup a time based OTP @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
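The description promises a Secret, URL, and QR-Code in the response. Given the Secret, a client can build the standard `otpauth://` provisioning URI that Google Authenticator accepts; a sketch, noting that the exact JSON keys of the response are undocumented and the issuer label here is an assumption:

```python
from urllib.parse import quote

def provisioning_uri(secret: str, account: str, issuer: str = "Netfluid") -> str:
    """Build a Google Authenticator Key URI from the returned Secret.

    Follows the otpauth://totp/{label}?secret=...&issuer=... format.
    Inspect the actual response for the real field names before use.
    """
    label = quote(f"{issuer}:{account}")
    return f"otpauth://totp/{label}?secret={secret}&issuer={quote(issuer)}"
```

Any one of the three returned values can be presented to the app; the URI form is useful when rendering a QR code client-side instead.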
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns an object with 'Secret, URL, QR-Code' for Google Authenticator setup, which adds useful context about the output format. However, it lacks details on behavioral traits like error handling, rate limits, or authentication requirements beyond parameter mentions, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy (e.g., repeating 'Returns the wallet's Google Authenticator registration.') and includes param annotations that could be integrated more smoothly. It front-loads the purpose but could be structured better to avoid repetition and improve readability, making it adequate but not optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters, no annotations, but has an output schema), the description is reasonably complete. It explains the purpose, output format, and parameter sources, and the presence of an output schema means return values are documented elsewhere. However, it could benefit from more behavioral context or usage guidelines to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description adds param annotations (@param) explaining that 'api_key' is allocated to the application, 'token' is from '/access/login', and 'wallet_fk' is from '/access/login', which clarifies sources and purposes. However, it does not fully compensate for the coverage gap, as it lacks format specifics or validation rules, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns the wallet's Google Authenticator registration.' It specifies the verb ('returns') and resource ('Google Authenticator registration'), making the action clear. However, it does not explicitly differentiate from sibling tools like 'ai__wallet_google_auth_verify', which might be for verification rather than retrieval, leaving room for improvement in sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions parameters like 'api_key', 'token', and 'wallet_fk' but does not explain the context or prerequisites for invoking it, such as authentication status or tool sequencing. Without explicit usage instructions or exclusions, it offers minimal practical guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_google_auth_verify (grade: A)
Verifies an OTP against the customer's Google Authenticator App
Verifies an OTP against the customer's Google Authenticator App. The wallet must be configured to generate OTPs, see /wallet/google_auth_generate. 2FA is not enabled until such time as /wallet/google_ @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param otp: The OTP to test against what is displayed on the Google Authenticator
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| otp | Yes | ||
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
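Because the schema states no format for `otp`, a client may want a defensive pre-check before spending a call. A sketch, assuming the usual six-digit Google Authenticator TOTP (an assumption, since the description never states a length):

```python
def build_otp_verify_args(api_key: str, token: str, wallet_fk: str, otp: str) -> dict:
    """Arguments for ai__wallet_google_auth_verify.

    The 6-digit check is an assumption made to catch obvious typos
    before the call; relax it if the server accepts other lengths.
    """
    if not (otp.isdigit() and len(otp) == 6):
        raise ValueError("otp should be the 6-digit code shown in the app")
    return {"api_key": api_key, "token": token, "wallet_fk": wallet_fk, "otp": otp}
```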
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It mentions the tool is for OTP verification and links to setup tools, implying a read-only or validation operation without destructive effects. However, it lacks details on behavioral traits like error handling, rate limits, authentication requirements beyond parameters, or what happens on success/failure. The description adds some context but is incomplete for a security-related tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy (repeats the first sentence) and includes truncated text ('/wallet/google_'), which reduces clarity. It front-loads the purpose but could be more streamlined by removing repetition and completing the truncated reference. The @param and @return sections are structured but add bulk without full efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage and an output schema present, the description compensates well by explaining parameters and stating a return type ('a json object'). It covers the core purpose and prerequisites. However, for a tool involving security and 2FA, it lacks details on error responses, success conditions, or integration context, which would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides @param annotations explaining each parameter's purpose and source (e.g., 'api_key: The api key allocated to your application', 'otp: The OTP to test'). This adds meaningful semantics beyond the bare schema, covering all 4 parameters. However, it does not detail formats or constraints (e.g., OTP length), keeping it from a perfect score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Verifies an OTP against the customer's Google Authenticator App.' It specifies the verb ('verifies'), resource ('OTP'), and target ('Google Authenticator App'). However, it does not explicitly differentiate from sibling tools like 'ai__wallet_google_auth_list' or 'ai__wallet_verify', which might handle related authentication tasks, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'The wallet must be configured to generate OTPs, see /wallet/google_auth_generate. 2FA is not enabled until such time as /wallet/google_' (though truncated). This indicates prerequisites and a specific workflow, but it does not explicitly state when not to use this tool or name alternatives among siblings, such as other verification methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_kyc_check (grade: C)
Checks whether the wallet/customer has been verified successfully
Checks whether the wallet/customer has been verified successfully @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
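In practice this tool gates the more expensive ai__wallet_kyc_session_create, which the server documents as needed only for unverified wallets and as carrying a per-session cost. A sketch of that gating; the `verified` key is a hypothetical field name, since the output schema is undocumented:

```python
def needs_kyc_session(check_response: dict) -> bool:
    """Decide whether ai__wallet_kyc_session_create should be called.

    'verified' is an assumed response key; adapt it to the actual
    ai__wallet_kyc_check payload. Only create a session when the
    wallet is not yet verified, since each session has a cost.
    """
    return not bool(check_response.get("verified", False))
```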
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It only states the basic purpose without mentioning whether this is a read-only check (likely), what authentication is needed beyond the parameters, potential rate limits, or what happens if verification hasn't been attempted. The description doesn't contradict annotations (none exist), but provides minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has redundant repetition ('Checks whether the wallet/customer has been verified successfully' appears twice). The parameter documentation is clear but could be better structured. Overall, it's adequately sized but not optimally organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (so return values are documented elsewhere), no annotations, and 3 parameters with 0% schema coverage, the description does an adequate job. It explains the purpose and parameters, but lacks important context about when to use it, prerequisites, and behavioral characteristics that would be helpful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by documenting all 3 parameters with their purposes and sources. It explains that api_key is 'allocated to your application', token is 'provided by /access/login', and wallet_fk is 'provided by /access/login'. This adds significant value beyond the bare schema, though it doesn't provide format details or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Checks whether the wallet/customer has been verified successfully', which is a clear verb+resource combination. However, it's somewhat vague about what 'verified' means (KYC verification is implied by the name but not explicitly stated), and it doesn't distinguish from sibling tools like 'ai__wallet_kyc_check_lite' or 'ai__wallet_verify'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (like needing to have completed a KYC session first), nor does it differentiate from similar sibling tools such as 'ai__wallet_kyc_check_lite' or 'ai__wallet_verify'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_kyc_check_lite (grade: C)
Checks whether the wallet/customer has performed an ID document scan and AML check
Checks whether the wallet/customer has performed an ID document scan and AML check. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description lacks behavioral details. It doesn't disclose if this is a read-only check, potential side effects, authentication needs beyond parameters, rate limits, or error handling. The vague '@return: a json object' adds little value, failing to compensate for missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but repetitive (first two sentences are identical) and poorly structured. Parameter annotations are included but add clutter without depth. It's front-loaded with the purpose but wastes space on redundancy and incomplete details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage, no annotations, and an output schema (implied by 'Has output schema: true'), the description is incomplete. It doesn't explain the return value's structure or meaning, and parameter guidance is minimal, leaving gaps for a KYC check tool in a complex financial context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must add meaning. It lists parameters with brief notes (e.g., 'api_key allocated to your application'), but these are minimal and don't explain format, sources, or constraints. For 3 undocumented parameters, this is insufficient to guide effective use.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool checks for ID document scan and AML check completion, which is a clear purpose. However, it's vague about what 'checks' entails (e.g., returns status, details, or just boolean) and doesn't distinguish from sibling 'ai__wallet_kyc_check' (without 'lite'), leaving ambiguity about differences.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'ai__wallet_kyc_check' or other KYC-related tools. It mentions parameters from '/access/login' but doesn't specify prerequisites or contexts for invocation, providing minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_kyc_session_create (grade: A)
Creates a new session for Identity Verification (KYC), only required on wallet that have not been KYC verified. Do not generate a link unless explicitly asked for by the user. There is a cost per session.
Creates a session for Identity Verification. Do not generate a link unless explicitly asked for by the user. This endpoint will return a URL, direct the customer to that URL on their mobile. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param profile: The session profile, values are FULL (ID verification, Face Verification, Liveness check, Email verification, AML) or LITE (ID verification, Face Verification, Email verification, AML) @param callback: The session callback URL which will be called with the verification result. This callback URL must call /callback/didit with the "verificationSessionId" and "status" as received on this callback url
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| profile | Yes | ||
| callback | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
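The description constrains `profile` to FULL or LITE (FULL adds a liveness check) and requires the `callback` URL to later relay `verificationSessionId` and `status` to /callback/didit. A sketch of an argument builder that enforces the documented profile values client-side:

```python
VALID_PROFILES = {"FULL", "LITE"}

def build_kyc_session_args(api_key: str, token: str, wallet_fk: str,
                           profile: str, callback: str) -> dict:
    """Arguments for ai__wallet_kyc_session_create.

    Rejects profiles other than the documented FULL/LITE values before
    incurring the per-session cost. The callback handling itself
    (forwarding to /callback/didit) is the integrator's responsibility.
    """
    if profile not in VALID_PROFILES:
        raise ValueError(f"profile must be one of {sorted(VALID_PROFILES)}")
    return {
        "api_key": api_key,
        "token": token,
        "wallet_fk": wallet_fk,
        "profile": profile,
        "callback": callback,
    }
```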
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It successfully describes key behavioral traits: the tool creates something new (implies mutation), has a cost implication, returns a URL that should be directed to customer's mobile, and requires explicit user request for link generation. It also mentions callback functionality. However, it doesn't cover error conditions, rate limits, or permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description has redundant content (the first two sentences are essentially repeated) and could be more efficiently structured. However, all information is relevant and the parameter documentation is well-organized. The repetition reduces conciseness, but the overall information density is reasonable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter mutation tool with no annotations, the description provides substantial context: purpose, usage constraints, cost implications, behavioral guidance, and detailed parameter semantics. The existence of an output schema means return values don't need explanation. The main gap is lack of error handling or permission requirements information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides excellent parameter semantics. It clearly explains all 5 parameters: api_key ('allocated to your application'), token and wallet_fk ('provided by /access/login'), profile (with FULL/LITE options and what they include), and callback (purpose and required handling). This fully compensates for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Creates a new session for Identity Verification (KYC)'. It specifies the target resource (KYC session) and context (only for unverified wallets). However, it doesn't explicitly differentiate from sibling tools like 'ai__wallet_kyc_check' or 'ai__wallet_kyc_check_lite', which appear to be read operations rather than session creation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'only required on wallet that have not been KYC verified' and 'Do not generate a link unless explicitly asked for by the user'. It also mentions cost implications ('There is a cost per session'). However, it doesn't explicitly state when NOT to use this tool or name alternative tools for related functions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_mnemonic (grade: C)
Returns the wallet's 24 word mnemonic phrase for future secret recovery.
Returns the wallet's 24 word mnemonic phrase for future secret recovery. It is important that the wallet owner is made aware of this recovery key phrase, for future use, when required @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | ||
| api_key | Yes | ||
| wallet_fk | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
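Since the output schema is undocumented, a client should validate the response before showing anything to the wallet owner. A sketch using a hypothetical `mnemonic` key and enforcing the 24-word length the description promises:

```python
def extract_mnemonic(response: dict) -> str:
    """Pull the recovery phrase out of the ai__wallet_mnemonic response.

    'mnemonic' is an assumed response key; adapt it to the real payload.
    Treat the result as a secret: present it to the wallet owner once
    and never write it to logs.
    """
    phrase = str(response.get("mnemonic", "")).strip()
    words = phrase.split()
    if len(words) != 24:
        raise ValueError(f"expected a 24-word phrase, got {len(words)} words")
    return phrase
```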
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Returns') and hints at security sensitivity by mentioning the mnemonic phrase is for 'secret recovery' and that the owner should be aware. However, it lacks details on authentication needs (beyond parameters), rate limits, error conditions, or data sensitivity implications, which are critical for a tool handling sensitive recovery information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has redundancy (the first sentence is repeated) and includes parameter annotations that are not well-integrated. The core purpose is stated upfront, but the structure could be improved by eliminating repetition and better organizing the parameter notes. It's not overly verbose, but it doesn't achieve optimal efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (sensitive recovery operation with 3 parameters), no annotations, and an output schema (which handles return values), the description is incomplete. It fails to address key contextual aspects like security warnings, error handling, or how this tool relates to siblings. The output schema mitigates the need to describe return values, but the description still lacks sufficient guidance for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate. It lists the three parameters with brief notes (e.g., 'The api key allocated to your application'), but these notes are vague and do not explain the semantics, sources, or formats beyond what the schema's type hints provide. For example, it doesn't clarify how 'wallet_fk' differs from 'token' or where to obtain them, leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns the wallet's 24 word mnemonic phrase for future secret recovery.' It specifies the verb ('Returns') and resource ('wallet's 24 word mnemonic phrase'), making the action clear. However, it does not explicitly differentiate from sibling tools like 'ai__access_recover' or 'ai__wallet_verify', which might have overlapping recovery-related functions, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance, mentioning only that 'the wallet owner is made aware of this recovery key phrase, for future use, when required.' It does not specify when to use this tool versus alternatives (e.g., other recovery tools in the sibling list), nor does it outline prerequisites or exclusions. This lack of explicit context limits its effectiveness in guiding the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_rba
Saves a Recipient Bank Account (RBA) for use on future withdrawals. Confirm (yes/no) before executing
Saves a Recipient Bank Account (RBA) for use on future withdrawals. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param beneficiary: The beneficiary name @param bank: The bank's name @param iban: The account's IBAN/Account number @param swift: The account's swift routing code or branch code, whichever is applicable @param email: The recipient's email address @param note: The note or recipient reference on the bank transfer @param address_street: The street address of the bank @param address_city: The city in which the bank is located @param address_zip: The postal code of the location of the bank, zip code or "None" @param countryISO2: The 2-letter ISO code of the country in which the bank is located, must match the content of the swift code
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| bank | Yes | | |
| iban | Yes | | |
| note | No | | |
| email | Yes | | |
| swift | Yes | | |
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
| address_zip | No | | |
| beneficiary | Yes | | |
| countryISO2 | No | | |
| address_city | No | | |
| address_street | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
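The parameter list above can be turned into a concrete call payload. The sketch below assembles the ai__wallet_rba arguments and checks the required fields from the parameter table before a call is made. The field names come from the @param documentation; the helper itself, the placeholder values, and the endpoint/transport are assumptions, since this page does not document how the tool is actually invoked.

```python
import json

# Required fields per the ai__wallet_rba parameter table above.
REQUIRED = {"api_key", "token", "wallet_fk", "beneficiary",
            "bank", "iban", "swift", "email"}

def build_rba_payload(**fields):
    """Assemble the tool arguments, validating the required fields
    from the parameter table before the (hypothetical) call is sent."""
    missing = REQUIRED - fields.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    # The docs allow the literal string "None" for address_zip.
    fields.setdefault("address_zip", "None")
    return json.dumps(fields)

# Placeholder values only -- obtain real token/wallet_fk from /access/login.
payload = build_rba_payload(
    api_key="APP_KEY",
    token="WALLET_API_TOKEN",
    wallet_fk="WALLET_FK",
    beneficiary="Jane Doe",
    bank="Example Bank",
    iban="DE89370400440532013000",
    swift="DEUTDEFF",        # country letters should match countryISO2
    email="jane@example.com",
    countryISO2="DE",
)
```

Note that the docs say countryISO2 "must match the content of the swift code" (characters 5-6 of a SWIFT/BIC are the country code), so the two values should be kept consistent.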
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool 'saves' data, implying a write operation, but does not disclose critical traits like authentication requirements (beyond listing parameters), potential side effects (e.g., if saving triggers notifications or updates), rate limits, or error handling. The mention of 'Confirm (yes/no) before executing' hints at a safety check but lacks detail on why confirmation is needed or what risks are involved.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: it repeats the purpose line unnecessarily ('Saves a Recipient Bank Account...' appears twice) and includes a verbose list of parameter descriptions that could be streamlined. The front-loaded purpose is clear, but the repetition and lack of grouping for related parameters (e.g., address fields) reduce efficiency, making it somewhat bloated without adding proportional value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (13 parameters, write operation) and the presence of an output schema (indicated by '@return: a json object'), the description is partially complete. It covers the purpose and parameters but lacks context on authentication flow (referencing other tools like 'ai__access_login'), error cases, or behavioral nuances. The output schema handles return values, but without annotations, the description should do more to explain safety and operational context, leaving it adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate by explaining parameters. It provides brief semantics for all 13 parameters (e.g., 'beneficiary: The beneficiary name'), adding meaning beyond the bare schema. However, it does not clarify constraints (e.g., format for 'iban' or 'swift'), dependencies (e.g., 'countryISO2' must match 'swift'), or examples, leaving gaps in practical usage despite covering all parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Saves a Recipient Bank Account (RBA) for use on future withdrawals.' It specifies the verb ('saves') and resource ('Recipient Bank Account'), making the action and target explicit. However, it does not distinguish this tool from potential siblings like 'ai__beneficiary_add' or 'ai__wallet_rba_list', which might handle similar beneficiary or RBA operations, leaving room for ambiguity in sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a usage note: 'Confirm (yes/no) before executing,' which implies a confirmation step but does not provide explicit guidance on when to use this tool versus alternatives. It lacks context on prerequisites (e.g., authentication via other tools like 'ai__access_login'), exclusions, or comparisons to sibling tools such as 'ai__withdraw_to_bank' or 'ai__beneficiary_add', offering minimal actionable guidance for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_rba_list
List all Recipient Bank Accounts (RBAs) on this wallet
List all Recipient Bank Accounts (RBAs) on this wallet @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden of behavioral disclosure. It states it's a list operation, implying it's likely read-only, but doesn't confirm this or describe any side effects, rate limits, authentication needs beyond parameters, or error conditions. The description adds minimal behavioral context beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, but it repeats the first sentence unnecessarily ('List all Recipient Bank Accounts (RBAs) on this wallet' appears twice). The @param and @return sections are structured but verbose for the content provided. Some sentences could be more efficient, such as combining parameter explanations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters with 0% schema coverage, an output schema exists (so return values are documented), and no annotations, the description partially compensates by listing parameters and stating the return type. However, it lacks details on authentication flow, error handling, and sibling tool differentiation, making it incomplete for a tool in a complex financial context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description includes @param annotations for all three parameters, adding basic semantics like 'api key allocated to your application' and references to other endpoints (e.g., '/access/login'). However, it doesn't explain formats, constraints, or relationships between parameters, leaving gaps in understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'all Recipient Bank Accounts (RBAs) on this wallet', which is specific and unambiguous. However, it doesn't differentiate from sibling tools like 'ai__beneficiaries_list' or 'ai__wallet_accounts_list', which might also list wallet-related entities, so it doesn't fully distinguish from alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites beyond the parameters, nor does it compare to sibling tools like 'ai__beneficiaries_list' or 'ai__wallet_accounts_list' that might list related items. The only implied context is that it's for RBAs on a wallet, but no explicit usage scenarios are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_referral_code
Returns this wallet's referral code.
Returns this wallet's referral code. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states it 'Returns this wallet's referral code,' which implies a read-only operation, but doesn't clarify authentication requirements (e.g., that the token must be valid), rate limits, error conditions, or what happens if no referral code exists. The description is minimal and lacks critical behavioral details for a tool with authentication parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose but includes redundant repetition ('Returns this wallet's referral code.' appears twice). The param annotations are useful but could be integrated more seamlessly. It's relatively concise but wastes space with duplication, and the structure is basic without clear sections for purpose, usage, or behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (a simple retrieval with authentication) and the presence of an output schema (which handles return values), the description is moderately complete. It covers the purpose and parameters but lacks behavioral context (e.g., error handling) and usage guidelines. With no annotations and 0% schema coverage, it should do more to explain authentication flows and potential outcomes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description adds param annotations (@param) that explain the source of each parameter (e.g., 'api_key: The api key allocated to your application'), which adds meaningful context beyond the bare schema. However, it doesn't detail format constraints (e.g., token length) or relationships between parameters, leaving some gaps. Given the coverage gap, this provides moderate compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns this wallet's referral code.' It uses a specific verb ('Returns') and resource ('wallet's referral code'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'ai__wallet_accounts_list' or 'ai__wallet_assets_list', which also retrieve wallet-related information but for different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., that the wallet must have a referral code set up) or compare it to sibling tools that might retrieve other wallet metadata. The only implied usage is based on the need for a referral code, but no explicit context or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_verify
Verifies the status of the wallet token, while keeping the token session alive.
Verifies the status of the wallet token, while keeping the token session alive. Call this regularly to keep the session token active. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
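The docs say to "call this regularly to keep the session token active", which implies a periodic keep-alive loop. A minimal sketch, assuming a `verify` callable that stands in for whatever client call actually reaches ai__wallet_verify, and an "ok" flag in its JSON result (both assumptions — the transport and response schema are not documented on this page):

```python
import time

def keep_session_alive(verify, interval_s=60, max_calls=None):
    """Invoke ai__wallet_verify on a fixed interval so the
    wallet_api_token session stays active.  `verify` is whatever
    client callable invokes the tool (hypothetical); it is assumed
    to return the tool's JSON object as a dict."""
    calls = 0
    while max_calls is None or calls < max_calls:
        result = verify()               # the tool returns a json object
        calls += 1
        if not result.get("ok", True):  # "ok" is an assumed failure flag
            break                       # re-authenticate via /access/login
        if max_calls is None or calls < max_calls:
            time.sleep(interval_s)
    return calls
```

The interval should be comfortably shorter than the session timeout; the page does not state what that timeout is.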
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly describes the dual purpose (verification + session keep-alive) and implies this is a read-only operation (verification). However, it doesn't disclose important behavioral aspects like authentication requirements beyond parameters, rate limits, error conditions, or what specific 'status' information is returned. The description adds some value but leaves significant gaps for a tool with security implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has structural issues. The first sentence is duplicated verbatim, which is wasteful. The parameter documentation is clear but could be more integrated. Overall, it's front-loaded with the core purpose, but the repetition and separate param/return sections make it less streamlined than ideal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's security/session management context, 3 parameters, no annotations, and an output schema (which handles return values), the description is moderately complete. It covers the purpose, usage pattern, and parameter origins adequately. However, for a tool dealing with authentication tokens, it should ideally mention security implications, error handling, or what constitutes successful verification beyond just 'keeping the session alive.'
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate for the lack of parameter documentation in the schema. It successfully provides semantic context for all three parameters: api_key ('allocated to your application'), token ('provided by /access/login'), and wallet_fk ('provided by /access/login'). This tells the agent where these values come from and their purpose, though it doesn't provide format details or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Verifies the status of the wallet token, while keeping the token session alive.' It specifies the verb ('verifies') and resource ('wallet token'), and distinguishes it from other tools by mentioning session keep-alive functionality. However, it doesn't explicitly differentiate from potential sibling tools like 'ai__session' or 'ai__access_platform_login' that might also handle tokens.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Call this regularly to keep the session token active.' This indicates when to use the tool (for session maintenance) and implies it should be used periodically rather than as a one-time check. However, it doesn't explicitly state when NOT to use it or name alternative tools for different token-related operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__wallet_voucher_list
Lists all vouchers kept in this wallet's vault.
Lists all vouchers kept in this wallet's vault. These are vouchers issued by any account_fk in this wallet @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a list operation (implying read-only), but doesn't mention authentication requirements beyond the parameters, rate limits, pagination, or what happens if the wallet has no vouchers. The '@return: a json object' is minimal and doesn't describe the structure or content of the response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with redundant repetition ('Lists all vouchers kept in this wallet's vault.' appears twice). The parameter explanations are minimal but necessary given the 0% schema coverage. The '@return' statement is vague. Overall, it's under-specified rather than concise, with wasted space on repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which handles return values), the description doesn't need to explain the return structure. However, for a tool with 3 required parameters and 0% schema description coverage, the description provides only basic parameter hints and minimal behavioral context. It's adequate as a starting point but leaves significant gaps in understanding how to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists the three parameters with brief explanations, but these explanations are minimal ('The api key allocated to your application', 'The wallet_api_token provided by /access/login', 'The wallet_fk provided by /access/login'). They don't explain format constraints, where to obtain these values, or how they interact. The description adds some meaning but doesn't fully compensate for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Lists all vouchers kept in this wallet's vault' which provides a clear verb ('Lists') and resource ('vouchers'). However, it doesn't distinguish this tool from potential sibling tools like 'ai__account_merchant_voucher_issue' or 'ai__account_merchant_voucher_redeem' that also deal with vouchers. The repetition of the first sentence adds no value.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (like needing to authenticate first via /access/login), nor does it differentiate from other voucher-related tools in the sibling list. The only implied context is that it operates on a wallet's vault.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__webhook_pause
Pauses the webhook on this wallet from receiving events
Pauses the webhook on this wallet from receiving events @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It states the tool pauses webhook events but lacks details on behavioral traits like whether this is reversible, requires specific permissions, affects other operations, or has side effects. No rate limits, error conditions, or response specifics are mentioned, leaving significant gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose but repeats it unnecessarily. The parameter documentation is structured with @param tags, which is clear, but overall it includes redundant text and could be more streamlined without losing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, 0% schema coverage, and an output schema present, the description covers the purpose and parameters adequately but lacks behavioral details and usage guidelines. It is minimally viable for a tool with three parameters, but gaps in transparency and guidelines reduce completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by documenting all three parameters (api_key, token, wallet_fk) with brief explanations of their sources (e.g., 'provided by /access/login'). This adds meaningful context beyond the bare schema, though it could be more detailed on usage or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Pauses the webhook on this wallet from receiving events', which clearly indicates the action (pause) and resource (webhook on a wallet). However, it does not differentiate from sibling tools like 'ai__webhook_resume' or 'ai__webhook_view', leaving ambiguity about when to use this specific pause function versus other webhook-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'ai__webhook_resume' or 'ai__webhook_set'. The description only repeats the purpose without indicating prerequisites, exclusions, or contextual triggers for pausing a webhook, offering minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__webhook_resume
Resumes a webhook
Resumes a webhook @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With empty annotations, the description carries full burden but provides minimal behavioral insight. It mentions authentication parameters (api_key, token, wallet_fk) but doesn't disclose whether this is a mutating operation, what side effects occur, rate limits, or error conditions. The return is vaguely described as 'a json object' without indicating success/failure patterns or data structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but inefficiently structured. It repeats 'Resumes a webhook' twice, wasting space. The parameter documentation is front-loaded but uses minimal annotations (@param) without integration into flowing text. While not verbose, it could be more cohesive and eliminate redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage, no annotations, and an output schema (which reduces need to describe returns), the description is incomplete. It lacks context on webhook identification (which webhook?), resumption effects, prerequisites (e.g., must be paused), and error handling. The sibling tools suggest a webhook management system, but this isn't leveraged for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It documents all three parameters with brief explanations (e.g., 'The api key allocated to your application'), adding basic semantic context beyond schema types. However, it doesn't clarify parameter relationships, validation rules, or example values, leaving significant gaps in understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Resumes a webhook' which is a tautology of the tool name 'ai__webhook_resume'. It repeats the same phrase twice without providing any additional context about what 'resuming' entails or what type of webhook is being resumed. While it distinguishes from siblings like 'ai__webhook_pause', it doesn't clarify the resource scope or operational details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The sibling list includes 'ai__webhook_pause', 'ai__webhook_set', and 'ai__webhook_view', suggesting a webhook management context, but the description doesn't explain the relationship between these tools or when resuming is appropriate (e.g., after pausing, for specific webhook states).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__webhook_set
Sets a webhook on this wallet
Sets a webhook on this wallet. The webhook URL receives a PUT call containing a JSON object whenever an event is triggered on this wallet. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login @param url: The url on which to receive a PUT containing a JSON object
@return: a json object

| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
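Since ai__webhook_set delivers events as a PUT request carrying a JSON body, the receiving end needs an HTTP handler that accepts PUT. A minimal stdlib sketch of such a receiver; the event payload's field names are not documented on this page, so the handler below only decodes and acknowledges:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_event(body: bytes) -> dict:
    """Decode the JSON object the wallet PUTs on every event."""
    return json.loads(body.decode("utf-8"))

class WalletWebhook(BaseHTTPRequestHandler):
    # Only do_PUT is implemented, because the docs state events arrive
    # as "a PUT call containing a JSON object".  The payload schema is
    # undocumented here, so the event is just logged as-is.
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        event = parse_event(self.rfile.read(length))
        print("wallet event:", event)
        self.send_response(200)   # acknowledge quickly; process async
        self.end_headers()

if __name__ == "__main__":
    # The registered url passed to ai__webhook_set must resolve to
    # this listener (a public HTTPS endpoint in practice).
    HTTPServer(("0.0.0.0", 8080), WalletWebhook).serve_forever()
```

Returning 200 promptly and deferring any heavy processing is the usual pattern for webhook receivers, since slow responses risk delivery timeouts or retries (retry behavior is not documented here).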
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must fully disclose behavior. It states the tool 'Sets a webhook' and that the URL receives PUT calls with JSON on events, which implies a write operation. However, it lacks details on permissions needed, whether this overwrites existing webhooks, error handling, or rate limits. The mention of PUT calls and JSON format adds some context but is insufficient for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: the first sentence is duplicated, and parameter explanations are embedded in the main text without clear separation. It front-loads the purpose but includes repetitive and inline details that could be better organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage and an output schema (implied by '@return: a json object'), the description adequately covers parameter semantics but lacks behavioral details for a mutation tool. It doesn't explain the return value's structure or error cases, relying on the output schema. With no annotations, it should provide more context on side effects or usage scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides inline documentation for all 4 parameters (api_key, token, wallet_fk, url), explaining their purposes and sources (e.g., 'provided by /access/login'). This adds significant meaning beyond the bare schema, though it could include format examples or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Sets a webhook on this wallet' which is a clear verb+resource combination. However, it doesn't distinguish this tool from its siblings like 'ai__webhook_pause', 'ai__webhook_resume', or 'ai__webhook_view', leaving the specific role ambiguous. The first sentence is repeated, adding no value.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'ai__webhook_pause' or 'ai__webhook_view'. The description mentions that webhooks receive PUT calls on events but doesn't specify what triggers this tool's use or any prerequisites beyond the listed parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
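The PUT delivery that ai__webhook_set promises can be exercised locally before registering a URL. A minimal receiver sketch, assuming only what the description states — events arrive as a JSON body on a PUT request; the event payload's fields are undocumented:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RECEIVED = []  # parsed event payloads, kept for inspection


class WebhookHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # ai__webhook_set registers a URL that receives a PUT with a
        # JSON object per wallet event; the payload shape is assumed.
        length = int(self.headers.get("Content-Length", 0))
        RECEIVED.append(json.loads(self.rfile.read(length) or b"{}"))
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet
```

Serving this handler (e.g. `HTTPServer(("0.0.0.0", 8080), WebhookHandler)`) behind an HTTPS-terminating proxy would give a URL suitable for the tool's `url` parameter.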
ai__webhook_view (C)
View the webhook on this wallet
View the webhook on this wallet. @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param wallet_fk: The wallet_fk provided by /access/login
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| wallet_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With empty annotations, the description carries full burden for behavioral disclosure. It states this is a 'view' operation (implying read-only) but doesn't clarify what exactly gets returned about the webhook, whether authentication is required beyond the parameters, potential error conditions, or rate limits. The '@return: a json object' is minimal and unhelpful. For a tool with no annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has structural issues. The first two lines are redundant ('View the webhook on this wallet' repeated). The param annotations are formatted but lack meaningful explanations. While not verbose, the repetition and minimal param explanations prevent higher scoring. The information is front-loaded with the core purpose statement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage, no annotations, and an output schema exists (though unspecified), the description is incomplete. It minimally states the purpose and lists parameters but doesn't adequately explain parameter meanings, return value structure, or behavioral context. For a tool that presumably retrieves webhook configuration in a wallet system, this leaves too many gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description includes param annotations that name the three parameters and provide minimal source hints ('provided by /access/login' for token and wallet_fk), but doesn't explain what these parameters represent, their format constraints, or why they're needed. This adds some value over the bare schema but doesn't adequately compensate for the 0% coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'View the webhook on this wallet' which is a tautology - it essentially restates the tool name 'ai__webhook_view'. While it identifies the resource (webhook) and context (wallet), it doesn't specify what 'view' means operationally (e.g., retrieve configuration, check status, see details) or differentiate from sibling webhook tools like 'ai__webhook_pause', 'ai__webhook_resume', and 'ai__webhook_set'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. The description doesn't mention sibling tools like 'ai__webhook_set' (for creating/updating webhooks) or 'ai__webhook_pause/resume' (for managing webhook state), nor does it specify prerequisites beyond the parameters. There's no context about when this read operation is appropriate versus other webhook-related actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__whois (C)
Provides tools to verify Netfluid accounts, Netfluid account addresses, Netfluid vouchers This tools provides reference information in the "referenced_tools" schema @return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the tool 'provides tools to verify' and returns 'a json object' but doesn't disclose behavioral traits like whether it's read-only vs. mutating, authentication requirements, rate limits, or what 'verify' entails operationally. The mention of 'referenced_tools' schema is unclear and adds little behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (three sentences) but poorly structured. The first sentence is somewhat redundant ('Provides tools... This tools provides...'), and the second sentence about 'referenced_tools' schema is confusing without context. It's front-loaded with the main purpose but could be more streamlined and clearer.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, no annotations, and an output schema exists (so return values are documented elsewhere), the description is minimally adequate. However, for a tool named 'whois' in a complex financial/account context with many siblings, it should better explain what 'verify' means and how it differs from other tools. The mention of 'referenced_tools' is incomplete without elaboration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters (schema description coverage 100%), so there's no need for parameter details in the description. The description appropriately doesn't discuss parameters, which is correct for a zero-parameter tool. Baseline is 4 when no parameters exist, as it avoids unnecessary information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'provides tools to verify Netfluid accounts, Netfluid account addresses, Netfluid vouchers' which gives a general purpose but is vague about what 'verify' means and doesn't specify a clear verb+resource combination. It's not a tautology (doesn't just restate the name 'ai__whois'), but it's too broad and doesn't distinguish from sibling tools like 'ai__account_info' or 'ai__netfluid_voucher_check'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'reference information in the "referenced_tools" schema' but doesn't explain what that means or how it relates to usage. There's no explicit when/when-not instructions or named alternatives among the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__withdraw (C)
Returns a list of methods by which an account can be funded. This tool provides reference information in the "referenced_tools" schema @return: a json object containing the schema
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden. It indicates a read-only operation ('Returns a list') and provides reference data, which implies safe, non-destructive behavior. However, it doesn't disclose rate limits, authentication needs, or potential side effects. The mention of 'referenced_tools' schema adds some context about output structure, but behavioral details are minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but poorly structured and confusing. The first sentence is clear, but the second contradicts it by focusing on schema reference instead of tool purpose. The @return note is redundant with the output schema. While short, the sentences don't earn their place effectively, leading to ambiguity rather than clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters, 100% schema coverage, and an output schema, the description is minimally adequate but flawed. The purpose contradiction with the tool name and vague 'reference information' statement leaves gaps in understanding the tool's role. For a simple lookup tool, it meets basic needs but fails to integrate cleanly with the context of sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter count is 0, and schema description coverage is 100%, so no parameters need documentation. The description doesn't add parameter details, but with zero parameters, a baseline of 4 is appropriate as there's nothing to compensate for. The mention of 'referenced_tools' schema hints at output structure, which is marginally relevant but not about inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Returns a list of methods by which an account can be funded', which is a clear purpose but contradicts the tool name 'withdraw' (funding implies deposit/credit, not withdrawal). The description also mentions 'reference information in the "referenced_tools" schema', which adds confusion rather than clarity. It fails to distinguish from obvious sibling tools like ai__fund, ai__on_ramps, or ai__account_buy that handle funding methods.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives is provided. The description mentions 'reference information' but doesn't specify use cases like planning deposits, checking available options, or prerequisites. With many sibling tools related to funding and withdrawals, this lack of differentiation leaves the agent guessing about appropriate contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__withdraw_ott_providers_list (B)
Returns a list of supported providers.
Step 1. Returns a list of supported providers, an array of provider_id is returned, select one and use with /withdraw/to_ott_quote. @param api_key: The api key allocated to your application
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the return format ('array of provider_id' and 'json object'), which is helpful. However, it doesn't disclose behavioral traits like rate limits, authentication needs beyond the api_key parameter, or whether this is a read-only operation. The description adds some value but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat structured with a numbered step and parameter annotation, but it's repetitive ('Returns a list of supported providers' appears twice). The information is front-loaded, but the repetition wastes space. It could be more concise while maintaining clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (true), the description doesn't need to explain return values in detail. It mentions the return includes 'an array of provider_id' and 'a json object', which complements the output schema. For a simple list-retrieval tool with 1 parameter and output schema, the description is reasonably complete, though it could benefit from more behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description explicitly documents the 'api_key' parameter with a brief explanation ('The api key allocated to your application'), which adds meaningful semantics beyond the bare schema. However, with only 1 parameter out of 1 documented, this is adequate but minimal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Returns a list of supported providers' which is a clear verb+resource combination. However, it doesn't distinguish this tool from its sibling 'ai__withdraw_to_ott_query' or 'ai__withdraw_to_ott_quote' which might have overlapping functionality. The purpose is understandable but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'select one [provider_id] and use with /withdraw/to_ott_quote', indicating this tool is a prerequisite step for another specific tool. It doesn't mention when NOT to use it or alternatives, but the context is clear enough for proper usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
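Step 1 yields provider_ids that feed into /withdraw/to_ott_quote. A sketch of the selection step, assuming a hypothetical response shape — the tool's output schema documents no fields, so `providers` and its layout are assumptions:

```python
import json

# Hypothetical response from ai__withdraw_ott_providers_list; the real
# field names are not documented by the output schema.
SAMPLE = '{"providers": [{"provider_id": "ott_a"}, {"provider_id": "ott_b"}]}'


def pick_provider(response_json: str) -> str:
    """Return one provider_id to pass to /withdraw/to_ott_quote (Step 2)."""
    providers = json.loads(response_json)["providers"]
    if not providers:
        raise ValueError("no OTT providers available")
    return providers[0]["provider_id"]
```

Which provider to pick is a business decision; taking the first entry here is just a placeholder.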
ai__withdraw_to_bank (B)
Withdraws to a bank account, anywhere world-wide. Confirm (yes/no) before executing
Withdraws to a bank account, anywhere world-wide. The account currency is the currency in which funds will be paid out. The recipient bank account (rba) must already be set. See /wallet/rba @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which to withdraw the funds. @param rba_fk: The rba_fk (recipient bank account id) to pay out to, can be found /wallet/rba_list. @param amount: The amount to withdraw @param note: The note on the transaction
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | | |
| token | Yes | | |
| amount | Yes | | |
| rba_fk | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses key behavioral traits: the need for confirmation, global reach, currency payout details, and rba prerequisites. However, it lacks information on permissions, rate limits, error handling, or transaction finality. For a financial withdrawal tool with no annotations, this is a moderate disclosure but misses critical safety and operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but has structural issues: it repeats the purpose in the first two sentences, and the parameter explanations are embedded in a block that mixes guidance with param details. It is front-loaded with the core action, but the repetition and lack of clear separation between usage notes and parameter semantics reduce efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial withdrawal), empty annotations, 0% schema coverage, but presence of an output schema, the description is partially complete. It covers purpose, some usage rules, and parameter meanings, but lacks behavioral details like idempotency, side effects, or error cases. The output schema handles return values, but the description does not fully address the operational context needed for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantic context for all 6 parameters: 'api_key' and 'token' are described in terms of allocation and source, 'account_fk' and 'rba_fk' specify fund source and recipient ID with references, 'amount' is straightforward, and 'note' explains its purpose. This adds significant value beyond the bare schema, though some details like data formats are still implicit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Withdraws to a bank account, anywhere world-wide.' It specifies the verb ('withdraws') and resource ('bank account'), and distinguishes it from sibling tools like 'ai__withdraw' (which might be generic) and 'ai__withdraw_to_ott' (for OTT withdrawals). However, it repeats the purpose in the first two sentences, slightly reducing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: it mentions the need to confirm before executing, that the recipient bank account must already be set (referencing '/wallet/rba'), and implies global scope. However, it does not explicitly state when to use this tool versus alternatives like 'ai__withdraw' or 'ai__withdraw_to_ott', nor does it specify prerequisites beyond the rba setup. This leaves gaps in guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__withdraw_to_ott (A)
Withdraws ZAR to any the OTT Mobile supported providers, confirm (yes/no) before executing
Step 3. Performs the withdrawal based on the quote_id returned from /withdraw/to_ott_quote. The final ZAR amount may differ from the quoted amount. On successful submission a payment_id will be returned @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which the funds will be withdrawn @param quote_id: The quote_id returned from withdraw/to_ott_quote @param mobile: The recipients South African mobile number, format 27XXXXXXXXX
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| mobile | No | | |
| api_key | Yes | | |
| quote_id | Yes | | |
| account_fk | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behaviors: requires confirmation before executing, final amount may differ from quoted amount, returns a payment_id on success. However, it doesn't mention error conditions, rate limits, authentication requirements beyond parameters, or what happens on failure. For a financial transaction tool with zero annotation coverage, this is moderately informative but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise but has structural issues. The first sentence is clear, but 'confirm (yes/no) before executing' is awkwardly placed. The step numbering ('Step 3.') seems out of context. The parameter documentation uses '@param' format which is redundant with the schema. However, all sentences add value, and it's not excessively verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a financial transaction tool with 5 parameters, 0% schema description coverage, no annotations, but with output schema (true), the description does a good job. It explains the tool's purpose, usage sequence, key behavioral aspects (confirmation, amount variance, success response), and parameter meanings. The output schema handles return values, so the description doesn't need to detail the JSON structure. For its complexity level, it's fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides semantic context for all 5 parameters: api_key ('allocated to your application'), token ('provided by /access/login'), account_fk ('from which the funds will be withdrawn'), quote_id ('returned from withdraw/to_ott_quote'), mobile ('recipients South African mobile number, format 27XXXXXXXXX'). This adds significant value beyond the bare schema, though it doesn't explain data types or constraints in detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Withdraws ZAR to any the OTT Mobile supported providers' (verb+resource+currency+target). It distinguishes from siblings like 'ai__withdraw_to_bank' by specifying OTT Mobile providers. However, the phrasing 'any the OTT Mobile' is slightly awkward, and it doesn't explicitly differentiate from 'ai__withdraw_to_ott_query' or 'ai__withdraw_to_ott_quote' beyond mentioning quote_id dependency.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Step 3. Performs the withdrawal based on the quote_id returned from /withdraw/to_ott_quote.' This explicitly states when to use this tool (after obtaining a quote) and references a specific sibling tool. However, it doesn't mention when NOT to use it or alternatives for different withdrawal methods beyond the quote dependency.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ai__withdraw_to_ott_query (A)
Queries the state of an OTT Mobile withdrawal
Step 4. Optional. Queries the state of an OTT Mobile withdrawal using the payment_id from withdraw/to_ott @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which the funds were withdrawn @param payment_id: The payment_id returned from withdraw/to_ott
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| payment_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It describes the tool as a query (read-only) and mentions it's optional, which helps understand its non-destructive nature. However, it doesn't disclose important behavioral traits like authentication requirements (beyond parameters), rate limits, error conditions, or what specific state information is returned. The description adds basic context but lacks comprehensive behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two main sentences: one stating the purpose and context, another listing parameters. The parameter documentation is clear but could be more front-loaded with the core purpose. No wasted sentences, though the '@return' note is redundant given the output schema exists.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage, the description does well to document all parameters. With an output schema present, it doesn't need to explain return values. However, as a financial query tool with no annotations, it could better explain authentication flows, error handling, or what 'state' information includes. The description is reasonably complete for its complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 4 parameters, the description compensates well by documenting all parameters with clear explanations: api_key ('allocated to your application'), token ('wallet_api_token provided by /access/login'), account_fk ('from which the funds were withdrawn'), and payment_id ('returned from withdraw/to_ott'). This adds significant meaning beyond the bare schema, though it doesn't provide format examples or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Queries the state of an OTT Mobile withdrawal' with specific resource (OTT Mobile withdrawal) and verb (queries). It distinguishes from siblings like 'ai__withdraw_to_ott' by indicating it's a follow-up query using payment_id from that operation. However, it doesn't explicitly differentiate from other withdrawal-related tools like 'ai__withdraw_to_bank' or 'ai__withdraw_to_ott_quote' beyond the OTT Mobile context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Step 4. Optional' and 'using the payment_id from withdraw/to_ott', indicating it's a follow-up to a specific sibling tool. It doesn't explicitly state when NOT to use it or name alternative tools for similar queries, but the context is sufficiently clear for an agent to understand its sequential role in a withdrawal workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
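The optional Step 4 status check described above can be sketched as a single call carrying the four required parameters. This is a hedged illustration, not Netfluid's actual client: the endpoint path `/withdraw/to_ott_state` and the response shape are assumptions inferred from the tool name and description, and `post` is an injected HTTP callable so the flow can be exercised without a live server.

```python
def check_ott_withdrawal_state(post, api_key, token, account_fk, payment_id):
    """Query the state of an OTT Mobile withdrawal (optional Step 4).

    `post` is any callable taking (path, payload) and returning the
    decoded JSON response; in production it would wrap an HTTP POST.
    """
    payload = {
        "api_key": api_key,        # key allocated to your application
        "token": token,            # wallet_api_token from /access/login
        "account_fk": account_fk,  # account the funds were withdrawn from
        "payment_id": payment_id,  # returned by withdraw/to_ott
    }
    # Path is an assumption based on the tool name ai__withdraw_to_ott_state.
    return post("/withdraw/to_ott_state", payload)
```

Injecting the transport keeps the workflow logic separate from the HTTP layer, which is also how an agent harness would typically wrap such a tool.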
ai__withdraw_to_ott_quote
Generates a withdrawal quote to a OTT Mobile supported provider
Step2. Generates a withdrawal quote to a OTT Mobile supported provider On successfully submission a quote_id will be returned, use with /withdraw/to_ott @param api_key: The api key allocated to your application @param token: The wallet_api_token provided by /access/login @param account_fk: The account_fk from which the funds will be withdrawn @param provider_id: The provider_id returned from withdraw/ott_providers_list @param amount: The amount to be withdrawn from the account_fk, in its native currency.
@return: a json object
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | | |
| amount | Yes | | |
| api_key | Yes | | |
| account_fk | Yes | | |
| provider_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that the tool generates a quote (not an actual withdrawal), returns a quote_id on success, and requires subsequent use with another tool. However, it lacks details on error conditions, rate limits, authentication needs beyond parameters, or what happens if the quote expires. It adds some behavioral context but misses key operational traits for a financial tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main purpose, but it repeats 'Step2.' unnecessarily and includes param annotations in a non-standard format (@param). It's moderately concise but could be streamlined by removing the redundant step label and integrating param details more smoothly. The structure is functional but not optimal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (financial withdrawal tool), no annotations, 0% schema coverage, but with an output schema (implied by '@return: a json object'), the description does a good job. It explains the purpose, parameters, and next steps. However, it lacks details on output structure (though the output schema may cover this), error handling, and security considerations, leaving some gaps for a tool of this nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides clear semantics for all 5 parameters: api_key (allocated to your application), token (from /access/login), account_fk (source of funds), provider_id (from withdraw/ott_providers_list), and amount (in native currency). This adds significant value beyond the bare schema, though it could include format examples or constraints (e.g., amount minimums).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'generates a withdrawal quote to a OTT Mobile supported provider', which is a specific verb+resource combination. It distinguishes from siblings like 'ai__withdraw_to_ott' (which likely executes the withdrawal) and 'ai__withdraw_ott_providers_list' (which lists providers), though it doesn't explicitly name these alternatives. The purpose is well-defined but could be slightly more explicit about sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: it's a step in a withdrawal process (Step2), and it mentions that the returned quote_id should be used with '/withdraw/to_ott'. This implies when to use this tool (to get a quote before executing a withdrawal) and hints at an alternative (the execution tool), though it doesn't explicitly state when not to use it or name all relevant siblings. The guidance is practical but could be more comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
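The Step 2 quote-then-withdraw sequence described above can be sketched as two chained calls. This is a hedged illustration under stated assumptions: the endpoint paths `/withdraw/to_ott_quote` and `/withdraw/to_ott` and the response fields follow the description's wording, not confirmed API documentation, and `post` is an injected HTTP callable.

```python
def quote_then_withdraw(post, api_key, token, account_fk, provider_id, amount):
    """Request a withdrawal quote (Step 2), then submit it (Step 3).

    `post` is any callable taking (path, payload) and returning the
    decoded JSON response.
    """
    auth = {"api_key": api_key, "token": token}
    # Step 2: on successful submission a quote_id is returned.
    quote = post("/withdraw/to_ott_quote", {
        **auth,
        "account_fk": account_fk,    # account the funds will leave
        "provider_id": provider_id,  # from withdraw/ott_providers_list
        "amount": amount,            # in the account's native currency
    })
    # Step 3: pass the quote_id to /withdraw/to_ott to execute.
    return post("/withdraw/to_ott", {**auth, "quote_id": quote["quote_id"]})
```

Separating the quote call from the execution call mirrors the two-tool split on the server side: the quote is non-destructive, while the second call actually moves funds.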
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!