synthetix-mcp
Server Details
Synthetix - 22 tools for data, metrics, and on-chain analytics
- Status: Unhealthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: junct-bot/synthetix-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 22 of 22 tools scored. Lowest: 2.6/5.
Every tool has a clearly distinct purpose with no ambiguity. Read functions (e.g., balanceOf, decimals) are separate from write functions (e.g., approve, transfer) and events (e.g., Approval, Transfer), each targeting specific contract methods or events. Overlap is minimal, as tools correspond directly to unique contract functionalities.
Mostly consistent with minor deviations. The majority of tools use camelCase (e.g., acceptOwnership, balanceOf), matching common smart contract naming conventions. Events like OwnerChanged and TargetUpdated use PascalCase; this mirrors the standard Solidity convention for events, though it reads as a slight inconsistency against the camelCase function names. Overall, the naming is readable and follows predictable patterns for the domain.
Borderline, as 22 tools feel heavy for a single contract interface but are reasonable given the comprehensive coverage of ERC-20-like functions, ownership management, and proxy features. It includes many read-only calls and events, which might be excessive for typical agent use but aligns with exposing full contract capabilities.
Complete coverage for the domain of a Synthetix-like contract, including ERC-20 functions (balanceOf, transfer, approve), ownership management (owner, nominateNewOwner), proxy features (target, setTarget), and events. No obvious gaps exist; agents can perform full CRUD/lifecycle operations and monitor state changes without dead ends.
Available Tools
22 tools

acceptOwnership (A)
Calls acceptOwnership(). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. Completes a two-step ownership transfer. Must be called by the pending owner address set via transferOwnership(). No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
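The two-step transfer pattern the description references (nominate, then accept) can be sketched in pure Python. `TwoStepOwnable` and its method names are illustrative stand-ins for the pattern, not the Synthetix contract's actual implementation:

```python
class TwoStepOwnable:
    """Minimal in-memory sketch of the nominate/accept ownership pattern.
    Illustrative only -- not the on-chain contract code."""

    def __init__(self, owner: str):
        self.owner = owner
        self.nominated_owner = None

    def nominate_new_owner(self, caller: str, new_owner: str) -> None:
        # Mirrors an onlyOwner modifier: only the current owner may nominate.
        if caller != self.owner:
            raise PermissionError("only the current owner can nominate")
        self.nominated_owner = new_owner

    def accept_ownership(self, caller: str) -> None:
        # Only the pending (nominated) owner may complete the transfer.
        if caller != self.nominated_owner:
            raise PermissionError("only the nominated owner can accept")
        self.owner = caller
        self.nominated_owner = None


c = TwoStepOwnable(owner="0xAlice")
c.nominate_new_owner(caller="0xAlice", new_owner="0xBob")
c.accept_ownership(caller="0xBob")
print(c.owner)  # 0xBob
```

Calling `accept_ownership` from any address other than the nominated one fails, which is the modifier requirement the tool description warns about.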
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses: the write operation ('Write function'), access control restrictions (the 'onlyOwner' example and the pending-owner requirement), and the return value ('No return value'). It lacks explicit mention of irreversibility or side effects beyond the ownership change.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with high information density. Front-loaded with critical behavioral info ('Write function', 'access control'). Minor tautology in 'Calls acceptOwnership()' but immediately followed by valuable context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter mutation tool with no output schema, the description comprehensively covers the two-step workflow, caller identity requirements, access control warnings, and return value behavior. Sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, which per guidelines establishes a baseline of 4. The description correctly provides no parameter details since none exist, and does not need to compensate for missing schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states it 'Completes a two-step ownership transfer' and distinguishes from sibling tools like nominateNewOwner by specifying this is the acceptance step (second step) versus the nomination/initiation step.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisites: 'Must be called by the pending owner address set via transferOwnership()'. Also warns to 'Check contract source for modifier requirements before calling', indicating when the tool is valid to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
allowance (A)
Calls allowance(owner: string, spender: string). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns uint256.
| Name | Required | Description | Default |
|---|---|---|---|
| owner | Yes | address (Ethereum address, 0x-prefixed). | |
| spender | Yes | address (Ethereum address, 0x-prefixed). | |
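What the returned uint256 represents (the amount a spender may still move on the owner's behalf, in base units) can be illustrated with a toy in-memory model. `Erc20Allowances` is a hypothetical class sketching the bookkeeping, not the on-chain contract:

```python
from collections import defaultdict


class Erc20Allowances:
    """Toy model of ERC-20 allowance bookkeeping (illustrative sketch)."""

    def __init__(self):
        # allowance[owner][spender] -> amount in base units
        self._allowance = defaultdict(lambda: defaultdict(int))

    def approve(self, owner: str, spender: str, value: int) -> bool:
        # Sets (overwrites) what `spender` may transfer on `owner`'s behalf.
        self._allowance[owner][spender] = value
        return True

    def allowance(self, owner: str, spender: str) -> int:
        # Read-only query of the remaining spendable amount.
        return self._allowance[owner][spender]

    def spend(self, owner: str, spender: str, value: int) -> None:
        # transferFrom consumes allowance previously granted via approve().
        if self._allowance[owner][spender] < value:
            raise ValueError("insufficient allowance")
        self._allowance[owner][spender] -= value
```

This is the semantic link the evaluation notes as missing: `approve` sets the mapping, `allowance` reads it, and transfers on behalf of the owner draw it down.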
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and successfully communicates three key behavioral traits: the read-only nature ('does not modify contract state'), access permissions ('Unrestricted — any address can call'), and the return type ('Returns uint256'). It could be improved by noting the return value represents token amounts in base units.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is tightly structured with four efficient sentences: function signature, mutability guarantee, access restriction, and return type. There is no redundant filler; every clause conveys essential information about the contract interaction.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (standard ERC20 view function) and lack of annotations or output schema, the description adequately covers the critical aspects: parameters, state mutability, access control, and return type. A minor gap is the lack of explanation that the uint256 return value is measured in the token's smallest decimal unit (wei equivalent).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (both owner and spender are documented as Ethereum addresses), establishing a baseline of 3. The description echoes the parameter names and types ('owner: string, spender: string') but does not add semantic context beyond the schema, such as explaining that 'owner' is the token holder and 'spender' is the delegated address.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it 'Calls allowance' which is tautological (repeating the tool name), but specifies parameters and that it's read-only. However, it fails to explain what an 'allowance' semantically represents in the ERC20 context (the amount a spender is permitted to transfer on behalf of an owner), which is necessary to distinguish this from balanceOf or approve among the siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through 'Read-only — does not modify contract state,' indicating it's for querying rather than mutation. However, it lacks explicit when-to-use guidance versus siblings like 'approve' (which sets allowances) or 'transferFrom' (which consumes them), and doesn't mention prerequisites like having an allowance set before checking it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Approval (A)
Event emitted by the contract. Indexed fields (filterable): owner, spender. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| owner | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| value | Yes | uint256 (pass as decimal string). | |
| spender | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
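Filtering by the indexed fields can be sketched without a node connection. The `logs` list and `filter_approvals` helper below are hypothetical stand-ins for what a log query (e.g. `eth_getLogs`) would return; only the indexed fields (owner, spender) are filterable server-side, while the non-indexed `value` can only be inspected after the logs come back:

```python
# Hypothetical decoded logs, standing in for a real log-query response.
logs = [
    {"event": "Approval", "owner": "0xAlice", "spender": "0xBob", "value": 100},
    {"event": "Approval", "owner": "0xCarol", "spender": "0xBob", "value": 50},
    {"event": "Transfer", "from": "0xAlice", "to": "0xBob", "value": 10},
]


def filter_approvals(logs, owner=None, spender=None):
    """Match Approval logs on the indexed fields; None means 'any'."""
    out = []
    for log in logs:
        if log.get("event") != "Approval":
            continue
        if owner is not None and log["owner"] != owner:
            continue
        if spender is not None and log["spender"] != spender:
            continue
        out.append(log)
    return out


print(len(filter_approvals(logs, spender="0xBob")))  # 2
```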
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It correctly identifies the tool as event-based and mentions indexing/filtering capabilities, but omits critical behavioral details: what triggers the event (allowance updates), whether this captures historical or only future events, and the emission semantics (one-time vs. continuous).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with two information-dense sentences. Front-loads the classification ('Event emitted by the contract'), follows with indexing details, and ends with usage mechanism. No redundant or wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple event subscription tool with no output schema, but missing contextual relationships. Given the sibling 'approve' function and other event tools, the description should explain that this event is emitted when token allowances change, providing the causal link missing from the current definition.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is established. The description adds value by explicitly noting that 'owner' and 'spender' are indexed/filterable fields, which signals query optimization capabilities beyond the raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies this as a contract event (distinguishing from the sibling 'approve' function) and specifies the indexed fields. However, it doesn't explain what the event signifies (token allowance approval), which would help distinguish its semantic purpose from other events like 'Transfer' or 'OwnerChanged'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides the usage pattern 'Subscribe via log filters,' implying this is for monitoring rather than state modification. However, it lacks explicit guidance on when to use this event subscription versus calling the 'allowance' function to query state directly, or how it relates to the 'approve' function that emits it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
approve (C)
Calls approve(spender: string, value: string). Unrestricted — any address can call this, but caller-specific logic may apply. Returns bool.
| Name | Required | Description | Default |
|---|---|---|---|
| value | Yes | uint256 (pass as decimal string). | |
| spender | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of behavioral disclosure and partially succeeds by noting the unrestricted access model and boolean return value. However, it omits critical state-change characteristics (that this modifies on-chain allowance mappings) and fails to mention emitted events or security implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently packs three distinct pieces of information (function signature, access control, and return type) into a single brief statement. While the opening clause restates schema information that could be inferred, the overall density is appropriate with no wasted verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description compensates adequately by specifying the boolean return type, and the 100% schema parameter coverage reduces the need for elaboration. However, for an ERC-20 approval function with significant security implications, the description lacks essential context about allowance mechanics and the relationship to transferFrom.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides complete type documentation for both parameters ('uint256' and 'Ethereum address'), achieving 100% coverage without needing description supplementation. The description merely echoes these types in the function signature without adding usage context (e.g., that value represents token quantity in wei/smallest unit).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description primarily restates the function signature ('Calls approve(spender: string, value: string)') without explaining the semantic purpose of approving token spending allowances. While it mentions access restrictions ('Unrestricted'), it fails to distinguish from siblings like 'transfer' or clarify that this enables third-party spending via transferFrom.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'transfer' (direct sending) or 'allowance' (checking permissions). It mentions that 'any address can call this' but does not explain the prerequisite conditions or consequences of invoking the approval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
balanceOf (A)
Calls balanceOf(owner: string). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns uint256.
| Name | Required | Description | Default |
|---|---|---|---|
| owner | Yes | address (Ethereum address, 0x-prefixed). | |
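The 0x-prefixed format expected by `owner` (and by every other address parameter in this server) can be validated client-side before calling. This helper is illustrative, not part of the server, and checks shape only (it does not verify the EIP-55 mixed-case checksum):

```python
import re

# 0x prefix followed by exactly 40 hex digits (20 bytes).
ADDRESS_RE = re.compile(r"0x[0-9a-fA-F]{40}")


def is_valid_address(addr: str) -> bool:
    """Shape check for a 0x-prefixed Ethereum address string."""
    return bool(ADDRESS_RE.fullmatch(addr))


assert is_valid_address("0x" + "ab" * 20)
assert not is_valid_address("ab" * 20)    # missing 0x prefix
assert not is_valid_address("0x1234")     # too short
```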
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and succeeds well: it explicitly states the read-only nature (no state modification), notes unrestricted access permissions, and specifies the return type (uint256). This gives the agent clear expectations about safety and output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four structured sentences efficiently cover function signature, mutability behavior, permission constraints, and return type. Given the absence of annotations, every sentence earns its place. The front-loading of 'Read-only' is appropriate for immediate safety signaling.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation without output schema, the description adequately covers essential operational context: what it does, side effects (none), access control (none), and return format. It could be improved by explicitly mentioning 'ERC20 token balance' given the sibling context, but it is functionally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('address (Ethereum address, 0x-prefixed)'), so the baseline applies. The description mirrors the parameter name and type ('owner: string') but adds minimal semantic context beyond the schema (e.g., it doesn't clarify that this is the address whose balance will be returned).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a read operation that queries a balance (implied by 'balanceOf' and 'Returns uint256'), specifying the owner parameter type. However, it doesn't explicitly state 'token balance' to distinguish from the sibling 'allowance' tool, which also returns a uint256 for an address but serves a different purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit guidance by noting it is 'Read-only' and 'Unrestricted,' indicating when it is safe to use (no gas costs, no permission requirements). However, it lacks explicit comparison to siblings—particularly 'allowance'—and doesn't clarify when to use this versus other balance-related queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
decimals (B)
Calls decimals(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns uint8.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
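The uint8 this returns is the token's decimal precision, used to convert between the raw base units that balanceOf/allowance/transfer work in and whole-token amounts. A minimal sketch (the 18 in the usage lines is an assumption, typical for 18-decimal ERC-20 tokens, not a value read from this contract):

```python
from decimal import Decimal


def to_human(base_units: int, decimals: int) -> Decimal:
    """Convert a raw uint256 amount into whole-token units using decimals()."""
    return Decimal(base_units) / (Decimal(10) ** decimals)


def to_base_units(amount: str, decimals: int) -> int:
    """Convert a human-readable decimal string into base units,
    e.g. for the 'value' parameter of approve/transfer."""
    return int(Decimal(amount) * (Decimal(10) ** decimals))


print(to_human(1_500_000_000_000_000_000, 18))  # 1.5
print(to_base_units("1.5", 18))                 # 1500000000000000000
```

Using `Decimal` rather than floats avoids rounding drift on large base-unit values.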
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations exist, the description carries the full burden. It successfully discloses 'Read-only' and 'Unrestricted' access patterns, but fails to explain the semantic meaning of the returned uint8 value (token precision/decimals). It provides type safety info but not behavioral context of what the number represents.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with clear separation of concerns: function call, mutability, permissions, and return type. No wasted words, though 'Calls decimals()' is slightly redundant with the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read function with no output schema, the description provides the return type (uint8) but omits the critical semantic explanation that this represents token decimal precision. Given the domain specificity (ERC20 contract), this omission leaves a significant gap in the agent's understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage (empty object), the baseline applies. The description correctly doesn't invent parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The phrase 'Calls decimals()' is tautological and doesn't explain what the function returns semantically (the number of decimal places for token amounts). While 'Returns uint8' provides the type, it lacks the domain-specific meaning that would help an agent understand this is an ERC20 token precision getter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like 'name', 'symbol', or 'totalSupply'. The description mentions unrestricted access but doesn't contextualize this within the broader token metadata retrieval workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
name (A)
Calls name(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns string.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosing behavioral traits, explicitly stating it 'does not modify contract state' and is callable by any address. It also discloses the return type ('Returns string'), providing essential output information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four efficient sentences that front-load the function call, followed by behavioral constraints and return type. Every sentence provides distinct value regarding side effects, permissions, or output format with no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of this parameterless read function and absence of an output schema, the description adequately covers essential behavioral information (read-only, unrestricted, returns string). The context of sibling ERC20 functions implies this retrieves the token name, making it sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4 per evaluation guidelines. The description correctly omits parameter discussion since none exist, focusing instead on behavioral characteristics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Calls name()' and clarifies it 'Returns string', establishing it as a getter for the contract's name property. It effectively distinguishes itself from sibling mutation functions like 'transfer' or 'approve' by explicitly noting the read-only nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit access control guidance with 'Unrestricted — any address can call this read function', indicating no authentication prerequisites. It also clarifies the read-only nature, helping the agent select this tool when querying state versus modifying it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nominatedOwner (A)
Calls nominatedOwner(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns address.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Compensates well for absent annotations by explicitly stating 'Read-only — does not modify contract state' and 'Unrestricted' access. Also discloses return type ('Returns address') since no output schema exists. Could improve by noting behavior when no owner is nominated (e.g., returns zero address).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short, efficient sentences with zero waste. Information is front-loaded with the function call pattern, followed by critical behavioral constraints (read-only, unrestricted), and ends with return type. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple getter with no parameters. The description covers the essential contract interaction pattern (read-only, unrestricted) and return type, compensating for the lack of output schema. Sufficient for an AI agent to invoke correctly, though domain context about ownership nomination would be a nice addition.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Baseline 4 for zero-parameter tool. No parameters to document, so nothing to add beyond the schema. Description correctly focuses on behavior and return value rather than inventing parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Starts with tautology 'Calls nominatedOwner()' but partially recovers by stating it returns an address and is read-only. However, it fails to explain what 'nominated owner' means in the context of the two-step ownership transfer pattern (pending owner awaiting acceptance), which would distinguish it from siblings like owner, nominateNewOwner, and acceptOwnership.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear auth context ('Unrestricted — any address can call'), but lacks workflow guidance. Does not explain when to use this vs owner (current owner) or how it relates to the ownership acceptance flow involving nominateNewOwner and acceptOwnership siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nominateNewOwner (A)
Calls nominateNewOwner(_owner: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| _owner | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description effectively carries the full burden: it explicitly declares this is a 'Write function,' warns about access control modifiers, and states 'No return value.' This gives the agent essential safety and behavioral context beyond what the schema provides.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three efficient sentences with logical flow: function signature, security warnings, and return value disclosure. No redundant information is present, though the first sentence could be more descriptive rather than just repeating the function call syntax.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a state-changing operation with no output schema, the description adequately covers the critical aspects: purpose, authorization requirements, and return behavior. While it could mention the relationship with acceptOwnership to complete the context, it provides sufficient information for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the _owner parameter (Ethereum address format). The description references '_owner: string' but does not add semantic meaning beyond the schema's definition. With full schema coverage, this meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a function that nominates a new owner ('Calls nominateNewOwner'), distinguishing it from siblings like acceptOwnership, owner, and nominatedOwner. However, the first clause is somewhat tautological (repeating the function name) and could more explicitly explain that this initiates a two-step ownership transfer process.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides crucial prerequisite guidance about access control restrictions ('onlyOwner') and advises checking contract source requirements before calling. However, it lacks explicit workflow guidance regarding the sibling tool acceptOwnership, which is required to complete the ownership transfer after nomination.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
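The two-step ownership transfer that the evaluations above keep referencing (nominateNewOwner followed by acceptOwnership, with OwnerNominated and OwnerChanged events in between) can be sketched in Python. This is a hypothetical in-memory simulation of the standard Owned-contract semantics, not a call to the MCP tools themselves; the class and method names are illustrative only.

```python
class OwnedContract:
    """Minimal simulation of the two-step ownership transfer pattern."""

    def __init__(self, owner: str):
        self.owner = owner
        self.nominated_owner = None
        self.events = []  # stand-in for emitted event logs

    def nominate_new_owner(self, caller: str, new_owner: str) -> None:
        # onlyOwner modifier: reverts unless the current owner calls
        if caller != self.owner:
            raise PermissionError("onlyOwner: caller is not the owner")
        self.nominated_owner = new_owner
        self.events.append(("OwnerNominated", new_owner))

    def accept_ownership(self, caller: str) -> None:
        # Only the nominated address may complete the transfer
        if caller != self.nominated_owner:
            raise PermissionError("caller is not the nominated owner")
        old_owner, self.owner = self.owner, caller
        self.nominated_owner = None
        self.events.append(("OwnerChanged", old_owner, caller))


c = OwnedContract(owner="0xAlice")
c.nominate_new_owner(caller="0xAlice", new_owner="0xBob")
assert c.owner == "0xAlice"          # nomination alone changes nothing
c.accept_ownership(caller="0xBob")
assert c.owner == "0xBob"            # transfer completes on acceptance
```

This is the workflow context the descriptions omit: nomination mutates only the pending state, and ownership actually changes when the nominee calls acceptOwnership.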
owner (A)
Calls owner(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns address.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and successfully discloses: (1) read-only safety ('does not modify contract state'), (2) permissionless access ('any address can call'), and (3) return value type ('Returns address'). It appropriately compensates for missing structured metadata.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short, declarative sentences with zero filler. Information is front-loaded with the action, followed by behavioral constraints and return value. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple state getter with no output schema, the description adequately compensates by specifying the return type ('address'). Given the low complexity and absence of parameters, this level of disclosure is sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool takes zero parameters, which warrants the baseline score per the rubric. The description correctly implies no inputs are required beyond the function call itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies this as a getter that returns the contract owner's address. While 'Calls owner()' is somewhat tautological, 'Returns address' specifies the output type. It does not explicitly distinguish from sibling 'nominatedOwner', but the core purpose is discernible.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides access constraints ('Unrestricted — any address can call') but lacks explicit guidance on when to use this versus 'nominatedOwner' or ownership transition tools like 'acceptOwnership'. The read-only nature is stated but not framed as a selection criterion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
OwnerChanged (C)
Event emitted by the contract. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| newOwner | Yes | address (Ethereum address, 0x-prefixed). | |
| oldOwner | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden but omits critical behavioral context: it doesn't explain that this event is emitted after 'acceptOwnership' is called, that it represents an irreversible state change notification, or that it is read-only log data rather than a transaction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely terse at two sentences with no filler, but potentially too minimal given the lack of annotations and the complexity of the ownership lifecycle. The critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for an event tool surrounded by related state-changing functions. The description should explain the relationship to 'acceptOwnership' and 'OwnerNominated' to help the agent understand the ownership transfer lifecycle, which is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the parameter types and formats are fully documented in the schema. The description adds no parameter information, but at this coverage level the baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as a contract event and mentions the subscription mechanism, but fails to distinguish from the similar 'OwnerNominated' sibling event or clarify that this signals a completed ownership transfer (vs. nomination).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While 'Subscribe via log filters' hints at the mechanism, there is no guidance on when to monitor this event versus calling 'owner()' directly, nor does it explain the workflow (nomination → acceptance → OwnerChanged event).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
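What "Subscribe via log filters" means in practice can be illustrated with a small simulation. The log dictionaries and the filter_logs helper below are hypothetical stand-ins for decoded event logs from an RPC client, used only to show how an agent would narrow logs by event name and indexed argument values.

```python
def filter_logs(logs, event=None, **indexed):
    """Return logs matching an event name and any indexed argument values."""
    out = []
    for log in logs:
        if event is not None and log["event"] != event:
            continue  # wrong event signature (topic0 mismatch)
        if any(log["args"].get(k) != v for k, v in indexed.items()):
            continue  # an indexed-argument filter did not match
        out.append(log)
    return out


# Hypothetical decoded logs spanning the ownership lifecycle
logs = [
    {"event": "OwnerNominated", "args": {"newOwner": "0xBob"}},
    {"event": "OwnerChanged", "args": {"oldOwner": "0xAlice", "newOwner": "0xBob"}},
    {"event": "Transfer", "args": {"from": "0xAlice", "to": "0xBob", "value": 1}},
]

changed = filter_logs(logs, event="OwnerChanged", newOwner="0xBob")
assert len(changed) == 1
assert changed[0]["args"]["oldOwner"] == "0xAlice"
```

The lifecycle ordering the review asks for is visible in the log list itself: OwnerNominated fires on nomination, OwnerChanged only after acceptance, so monitoring OwnerChanged is the reliable signal that a transfer completed.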
OwnerNominated (C)
Event emitted by the contract. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| newOwner | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It correctly identifies this as an event subscription (not a transaction), but misses critical behavioral context: whether this fires once or multiple times, that it precedes acceptOwnership, and that parameters act as log filters rather than mutation inputs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief at two sentences. However, 'Event emitted by the contract' wastes space stating the obvious (the tool name and context imply it's an event) without adding semantic value. 'Subscribe via log filters' is information-dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and lack of output schema, the mechanical description is minimally sufficient. However, given the rich sibling context (nominateNewOwner, acceptOwnership, OwnerChanged), the description should explain this event's place in the ownership transfer lifecycle—it currently provides isolated mechanics without workflow context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the 'newOwner' parameter is fully typed as an Ethereum address in the schema. The description adds no parameter context beyond the schema, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an event and mentions subscription via log filters, but fails to explain what the event semantically represents (ownership nomination). 'Event emitted by the contract' is generic and doesn't leverage the tool name 'OwnerNominated' to clarify the specific contract lifecycle event being tracked.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it mentions 'Subscribe via log filters' as a mechanism, it provides no guidance on when to listen for this event (e.g., after calling nominateNewOwner) or how it differs from the OwnerChanged sibling event. No filtering guidance is provided despite the newOwner parameter implying filter capabilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
setTarget (A)
Calls setTarget(_target: string). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| _target | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses: (1) write/mutation behavior ('Write function'), (2) access control risks ('may have access control restrictions'), and (3) return value behavior ('No return value'). It misses side effects like the TargetUpdated event implied by sibling tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with minimal waste. The first sentence is slightly redundant (repeats function signature), but subsequent sentences efficiently convey critical safety information (access control) and return value status. Front-loaded with the function signature.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter state-changing function with no output schema. The description compensates for missing output schema by explicitly stating 'No return value' and warns about access control. Could mention the TargetUpdated event (present in siblings) for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage ('address (Ethereum address, 0x-prefixed)'), establishing a baseline of 3. The description mentions '_target: string' but adds no semantic meaning beyond the schema (e.g., what this address represents in the contract logic).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Calls setTarget(_target: string)' which tautologically restates the tool name. While 'Write function' clarifies the operation type, it lacks specific semantics about what 'target' represents (e.g., implementation address, destination) and doesn't explicitly differentiate from the sibling 'target' getter function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly warns about access control restrictions (e.g., onlyOwner) and instructs to 'Check contract source for modifier requirements before calling.' This provides clear prerequisites for usage, though it doesn't explicitly contrast with alternatives like when to use versus the 'target' view function.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
setUseDELEGATECALL (A)
Calls setUseDELEGATECALL(value: boolean). Write function — may have access control restrictions (e.g. onlyOwner). Check contract source for modifier requirements before calling. No return value.
| Name | Required | Description | Default |
|---|---|---|---|
| value | Yes | bool. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates that this is a write operation, warns about potential access control requirements, and explicitly states there is no return value—critical safety information for an administrative contract function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured across four sentences with minimal waste. It front-loads the function signature, immediately follows with the write classification and access control warnings, and concludes with the return value status. Each sentence conveys distinct, valuable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter administrative setter without an output schema, the description is sufficiently complete. It addresses the critical domain-specific concern of access control permissions, which is essential for smart contract administration tools, though it could briefly clarify the operational impact of the boolean setting.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (the 'value' parameter is typed as 'bool'), establishing a baseline of 3. The description references the boolean parameter in the function signature but does not add semantic meaning regarding what true/false values actually enable or disable (e.g., enabling proxy delegatecall mode vs. direct calls).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a write function that sets the DELEGATECALL configuration, distinguishing it from the sibling getter 'useDELEGATECALL'. While the opening 'Calls setUseDELEGATECALL(value: boolean)' is slightly tautological, the 'Write function' label effectively establishes the tool's mutative purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides important prerequisite guidance by warning about access control restrictions (e.g., onlyOwner) and advising users to check contract source for modifiers. However, it lacks explicit guidance on when to use this tool versus sibling administrative functions like 'setTarget' or 'acceptOwnership'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
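The proxy relationship among target, setTarget, and setUseDELEGATECALL that the evaluations allude to can be sketched as follows. This is a loose Python analogy under stated assumptions: real DELEGATECALL semantics concern executing the implementation against the proxy's own storage, which is only noted in a comment here, and all names are illustrative.

```python
class Proxy:
    """Toy model of a proxy with a swappable implementation target."""

    def __init__(self, owner: str):
        self.owner = owner
        self.target = None           # implementation (an object here; an address on-chain)
        self.use_delegatecall = False

    def set_target(self, caller: str, target) -> None:
        if caller != self.owner:
            raise PermissionError("onlyOwner")
        self.target = target         # on-chain this emits TargetUpdated(newTarget)

    def set_use_delegatecall(self, caller: str, value: bool) -> None:
        if caller != self.owner:
            raise PermissionError("onlyOwner")
        self.use_delegatecall = value

    def call(self, name, *args):
        # Forward the call to the implementation; with DELEGATECALL the
        # implementation would run against the proxy's storage (not modeled).
        return getattr(self.target, name)(*args)


class ImplV1:
    def symbol(self):
        return "SNX"


p = Proxy(owner="0xAlice")
p.set_target("0xAlice", ImplV1())
assert p.call("symbol") == "SNX"
```

This also shows the selection criterion the descriptions leave implicit: target reads the current implementation address, while setTarget is the owner-gated upgrade step that changes it.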
symbol (A)
Calls symbol(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns string.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden effectively. It explicitly states the operation is read-only, unrestricted (no permission requirements), and returns a string type—critical safety and usage information that would otherwise be missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four short, information-dense clauses with zero redundancy. Each sentence delivers distinct value: operation identity, state safety, permission model, and return type. Excellently front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates by specifying 'Returns string'. For a simple read function with no parameters, the description adequately covers operational semantics, though it could optionally specify the typical format (e.g., 3-4 character ticker).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline of 4. The description appropriately does not mention parameters, which is correct for a parameterless function call.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the tool as calling 'symbol()' and returning a string, which in context of the ERC20 sibling tools (name, decimals, totalSupply) clearly indicates it retrieves the token ticker symbol. However, it relies on domain knowledge rather than explicitly stating 'returns the token symbol identifier'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance by noting it is 'Read-only' and 'Unrestricted', helping distinguish it from state-modifying siblings like transfer or approve. However, it does not explicitly differentiate from similar metadata read functions like 'name' (full name vs ticker symbol).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
target (A)
Calls target(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns address.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the read-only nature ('does not modify contract state') and permission requirements ('any address can call'). It also specifies the return type ('Returns address'), compensating for the missing output schema. It does not explain what the target address represents (e.g., implementation contract address) or edge cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four short, declarative sentences with no redundant phrases. Information is front-loaded with the function call, followed by behavioral traits and return value. The structure is efficient, though 'Calls target()' adds minimal value beyond the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple getter with no input parameters and no output schema, the description covers the essential behavioral traits (read-only, unrestricted) and return type. Given the low complexity and absence of structured output metadata, this level of description is sufficient, though specifying that the returned address is the implementation contract address would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero input parameters, which per guidelines establishes a baseline of 4. The schema is trivially complete (100% coverage of zero properties), requiring no additional parameter semantics from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as a read function that returns an address, distinguishing it from the sibling setter 'setTarget'. While 'Calls target()' is somewhat tautological, the clarification that it returns an address and does not modify state makes the purpose clear in the context of the proxy pattern evident from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes the function is 'Unrestricted — any address can call this read function', which provides useful access control context. However, it lacks explicit guidance on when to use this versus alternatives (e.g., checking the target address before interacting with the contract) or why one might need this value.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
TargetUpdated (C)
Event emitted by the contract. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| newTarget | Yes | address (Ethereum address, 0x-prefixed). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It correctly identifies this as a subscription-based event listener rather than a transaction, but omits critical behavioral details: trigger conditions (when setTarget is called), whether this supports historical filtering or only new blocks, and the event payload structure beyond the single parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two sentences with no redundant words. However, the brevity comes at the cost of completeness; the second sentence could have been expanded to include trigger conditions or proxy-specific context without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the proxy contract context evident from siblings (setTarget, useDELEGATECALL), the description is insufficient. It fails to explain that this event signals an implementation upgrade, what the newTarget parameter represents, or the relationship between this event and the setTarget function call that emits it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, documenting newTarget as an Ethereum address. The description adds no semantic information about parameters, meeting the baseline for high-coverage schemas but not compensating with context about what this address represents (the new implementation target).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Identifies the tool as an event subscription mechanism ('Event emitted by the contract'), distinguishing it from state-changing functions like setTarget. However, it fails to specify what 'Target' refers to (likely the proxy implementation address) or what differentiates this event from siblings like Transfer or OwnerChanged.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides minimal mechanical guidance ('Subscribe via log filters') but offers no strategic guidance on when to subscribe to this event versus calling the target() function directly, or how this relates to the setTarget function that presumably triggers it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
totalSupply (A)
Calls totalSupply(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns uint256.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates read-only nature ('does not modify contract state'), permissionless access ('Unrestricted — any address can call'), and return type ('Returns uint256'), providing sufficient context for a state-reading contract call.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences with zero redundancy. Each sentence delivers distinct value: function identification, mutability guarantees, permission status, and return type. Front-loaded with the most critical behavioral constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameterless read function, the description is adequate. It compensates for the missing output schema by explicitly stating the return type (uint256). Given the standard ERC20 context implied by sibling tools, no additional complexity requires explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, which establishes a baseline score of 4. The description correctly omits parameter discussion since none exist, and the schema is trivially complete at 100% coverage with no required fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a read operation calling the totalSupply() contract function and specifies it returns uint256. While 'Calls totalSupply()' is slightly circular with the tool name, the context distinguishes it from sibling tools like balanceOf or transfer by identifying it as the supply query function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives. It does not indicate that this should be used when needing the total token supply versus account balances (balanceOf) or that it is commonly used alongside other ERC20 query functions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
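A raw uint256 value like the one totalSupply returns is denominated in the token's smallest unit; turning it into a human-readable figure requires the result of the sibling decimals tool. A stdlib-only sketch of that conversion (the helper name is hypothetical):

```python
from decimal import Decimal

def to_display_units(raw_value: str, decimals: int) -> Decimal:
    """Scale a uint256 decimal string (e.g. a totalSupply result) by the token's decimals."""
    return Decimal(raw_value) / (Decimal(10) ** decimals)

# e.g. to_display_units("1500000000000000000", 18) -> Decimal("1.5")
```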
transferB
Calls transfer(to: string, value: string). Unrestricted — any address can call this, but caller-specific logic may apply. Returns bool.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | address (Ethereum address, 0x-prefixed). | |
| value | Yes | uint256 (pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return type ('Returns bool') and access restrictions ('Unrestricted'), but omits critical behavioral details: that this is a state-mutating operation, potential failure modes (insufficient balance), or that it likely emits a Transfer event.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at three sentences with no filler. However, the first sentence merely restates the function signature rather than describing the tool's purpose, slightly weakening the front-loading of critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the ERC20 context (evident from sibling tools like 'approve', 'balanceOf'), the description provides minimal viable information: it identifies the function signature, return value, and access pattern. However, without annotations, it lacks completeness regarding state changes, error handling, and sibling differentiation that would be necessary for robust agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description merely repeats the parameter types ('to: string, value: string') already documented in the schema without adding semantic context (e.g., that 'value' represents token amounts or 'to' is the recipient address).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the function as 'transfer' and mentions parameters, but it relies on the reader to infer this is an ERC20 token transfer. It fails to explicitly distinguish from the sibling 'transferFrom' tool or clarify that it transfers the caller's own tokens, though the 'unrestricted' hint provides partial differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'Unrestricted — any address can call this' guidance hints at usage (no approval needed), but it does not explicitly state when to use this versus 'transferFrom' or other alternatives. The 'caller-specific logic may apply' caveat is vague and doesn't provide concrete prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
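Per the table above, `to` must be a 0x-prefixed address and `value` a uint256 passed as a decimal string. A minimal client-side check an agent could run before calling the tool (stdlib only, hypothetical helper name):

```python
import re

UINT256_MAX = 2**256 - 1

def validate_transfer_args(to: str, value: str) -> dict:
    """Check transfer(to, value) arguments against the schema's stated types."""
    if not re.fullmatch(r"0x[0-9a-fA-F]{40}", to):
        raise ValueError(f"'to' is not a 0x-prefixed 20-byte address: {to!r}")
    if not value.isdigit():
        raise ValueError(f"'value' must be a non-negative decimal string: {value!r}")
    if int(value) > UINT256_MAX:
        raise ValueError("'value' exceeds the uint256 range")
    return {"to": to, "value": value}
```

Rejecting scientific notation and negative numbers up front avoids an on-chain revert after the call is already submitted.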
TransferB
Event emitted by the contract. Indexed fields (filterable): from, to. Subscribe via log filters.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| from | Yes | address (Ethereum address, 0x-prefixed) (indexed). | |
| value | Yes | uint256 (pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds value by identifying the 'Indexed fields' and their filterable nature, but omits whether the tool establishes a persistent subscription, returns historical events, or the format and structure of the returned log data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely compact at three sentences with zero redundancy. Key information (event type, indexed fields, subscription method) is front-loaded and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description adequately covers the basic nature of the tool but leaves gaps regarding return format, subscription lifecycle, and the semantic meaning of the Transfer event (token movement).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with detailed type descriptions. Description adds meaningful context by explicitly noting which fields are indexed and filterable (from, to), enhancing understanding of how to construct queries beyond raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies this as an 'Event emitted by the contract' and mentions subscription capability, but it fails to explain what triggers the event (token transfers) or to distinguish it from the sibling 'transfer' function. It explains what the event is rather than clearly stating the tool's action (subscribing to or querying logs).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers 'Subscribe via log filters' as a usage hint, but lacks explicit guidance on when to subscribe to this event versus calling the 'transfer' function to execute transfers. It makes no mention of prerequisites such as connection requirements or filter syntax.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
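'Filterable' here refers to EVM log topics: each indexed parameter (`from`, `to`) is stored as a 32-byte topic, so an address must be left-padded to 32 bytes before it can be used in a log filter. A stdlib-only sketch of that padding (topic0, the keccak-256 hash of the event signature, is omitted because keccak is not in the standard library):

```python
def address_to_topic(addr: str) -> str:
    """Left-pad a 20-byte 0x-prefixed address to the 32-byte log-topic format."""
    body = addr[2:].lower()
    if len(body) != 40 or set(body) - set("0123456789abcdef"):
        raise ValueError(f"not a valid address: {addr!r}")
    return "0x" + body.rjust(64, "0")
```

The resulting 66-character hex string is what goes into the `topics` array of an `eth_getLogs`-style filter for the indexed `from` or `to` position.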
transferFromC
Calls transferFrom(from: string, to: string, value: string). Unrestricted — any address can call this, but caller-specific logic may apply. Returns bool.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | address (Ethereum address, 0x-prefixed). | |
| from | Yes | address (Ethereum address, 0x-prefixed). | |
| value | Yes | uint256 (pass as decimal string). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions 'Returns bool' which is useful since no output schema exists, but fails to disclose state-changing behavior (token balances change), the allowance check mechanism, or potential failure modes (revert if insufficient allowance).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact at three sentences. The first sentence is slightly redundant (repeating schema info), but the second sentence about unrestricted access and third about return value earn their place. Information is front-loaded reasonably well.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an ERC20 transferFrom function (evident from siblings like 'approve', 'allowance'), the description is incomplete. It omits the crucial allowance/approval requirement that defines this function's behavior compared to standard transfer, and provides no context about the boolean return value semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (addresses and uint256 types documented). The description lists the parameters but adds no semantic meaning beyond the schema (e.g., does not explain that 'from' is the token owner and 'to' is the recipient). Baseline 3 is appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies the function signature but is somewhat tautological ('Calls transferFrom'). While it implies token transfer through the function name and parameters, it does not explicitly state the semantic purpose (transferring tokens on behalf of an owner) nor distinguish from the sibling 'transfer' tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It mentions 'Unrestricted — any address can call this,' which provides some access context, but fails to explain the critical prerequisite that the caller must have an allowance/approval from the 'from' address. It does not explain when to use transferFrom versus the sibling 'transfer' tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
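The allowance prerequisite the review flags can be illustrated with a toy in-memory model of standard ERC-20 semantics (illustrative only, not the Synthetix contract):

```python
class ToyToken:
    """Minimal in-memory model of ERC-20 approve/transferFrom semantics."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}  # (owner, spender) -> remaining allowance

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount
        return True

    def transfer_from(self, caller, frm, to, value):
        # The caller spends frm's tokens; this requires a prior approve() by frm.
        allowed = self.allowances.get((frm, caller), 0)
        if allowed < value or self.balances.get(frm, 0) < value:
            return False  # a real contract would revert instead
        self.allowances[(frm, caller)] = allowed - value
        self.balances[frm] -= value
        self.balances[to] = self.balances.get(to, 0) + value
        return True
```

Without the approve() step, transfer_from fails, mirroring the revert-on-missing-allowance behavior the description omits and clarifying when an agent should call transfer (spending its own balance) versus transferFrom (spending a delegated allowance).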
useDELEGATECALLA
Calls useDELEGATECALL(). Read-only — does not modify contract state. Unrestricted — any address can call this read function. Returns bool.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the operation is read-only, unrestricted by permissions, and returns a boolean. It does not contradict any structured data, though it could further clarify what the boolean value signifies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely efficient at four short sentences. It front-loads the critical behavioral traits (read-only, unrestricted) and the return type immediately, with no redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (zero-input boolean getter) and lack of output schema, the description adequately covers the return type and access control. It would achieve a 5 if it explained the semantic meaning of the boolean (e.g., true indicates delegatecall is enabled).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool accepts zero parameters, establishing a baseline score of 4. The description does not need to compensate for schema gaps, and the 100% schema description coverage is vacuously satisfied.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states that the tool 'Calls useDELEGATECALL()', which borders on tautology, but it clarifies that the call is 'Read-only' and 'Returns bool'. It distinguishes the tool from the sibling 'setUseDELEGATECALL' by noting it does not modify state, though it never explains what the boolean represents (whether the contract routes calls through the delegatecall proxy pattern).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Unrestricted — any address can call this read function', indicating no authentication prerequisites, but it fails to specify when to use this getter versus the sibling setter 'setUseDELEGATECALL', or what business logic would require checking this flag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.