
Server Details

Burn-to-create looksmaxxing engine on MegaETH. 14 MCP tools for DeFi, NFTs, and burns.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: megachadxyz/mega-chad
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

20 tools
chat_with_megachad (Chat with MegaChad) - Grade: B

Send a plain English message and get back structured, actionable responses. Supports intents: price, stats, wallet, looksmaxx, swap, gallery, leaderboard, bridge, gasless, referral, about.

Parameters (JSON Schema):
- wallet (optional): Wallet address for context (0x...)
- message (required): Natural language query (e.g. "What is the current MEGACHAD price?")
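As a sketch of how an agent might invoke this tool: the envelope below follows the standard MCP `tools/call` JSON-RPC shape, with only the tool name and the two parameters taken from the schema above; the transport details and response shape are not shown on this page.

```python
import json

# Hypothetical MCP "tools/call" request for chat_with_megachad.
# "message" is required; "wallet" is optional context per the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "chat_with_megachad",
        "arguments": {
            "message": "What is the current MEGACHAD price?",
            # "wallet": "0x...",  # optional; include for wallet-aware answers
        },
    },
}
payload = json.dumps(request)
```

The `wallet` argument is simply omitted when no context is needed, since the schema marks it optional.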
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden. It discloses that responses are 'structured' and lists functional intents, but omits critical behavioral traits: whether the chat maintains conversation state/session history, required authentication, rate limits, or how it handles ambiguous queries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no filler. It front-loads the core purpose before listing intents. The intent enumeration is efficient but could be improved with categorization or a clearer link to the sibling tools that fulfill similar functions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, and the tool's position as a generalist among many specialists, the description should clarify the tool selection strategy. The intent list provides functional coverage, but the absence of output format details or statefulness information leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for both parameters. The description reinforces the 'plain English' nature of the message parameter but adds no additional semantic detail (e.g., prompt engineering tips, expected length constraints) beyond what the schema already provides. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action (send a plain English message) and return type (structured, actionable responses). It also enumerates supported intents. However, it does not effectively distinguish this generalist chat tool from specific siblings like `get_price` or `get_wallet_info`, leaving ambiguity about which tool to select.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description lists 11 supported intents (price, stats, wallet, etc.), it provides zero guidance on when to use this conversational interface versus the dedicated, single-purpose sibling tools (e.g., `get_price`, `get_swap_quote`). The agent cannot determine if this is a fallback tool or primary interface.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cross_chain_looksmaxx (Cross-Chain Looksmaxx) - Grade: A

Build a cross-chain looksmaxx plan from any supported chain (Ethereum, Base, Arbitrum, Optimism, Polygon, BNB, Avalanche, Scroll, zkSync, Linea) to MegaETH. Returns step-by-step execution plan with bridge URLs, swap calldata, and burn instructions.

Parameters (JSON Schema):
- amount (optional): ETH amount to bridge (e.g. "0.15")
- wallet (optional): Wallet address for pre-built calldata (0x...)
- referrer (optional): Referrer agent address for 5% commission (0x...)
- sourceChain (required): Source chain name (e.g. "base", "arbitrum", "ethereum")
Behavior: 3/5

With no annotations provided, the description carries the full burden and successfully discloses the return structure (step-by-step plan with bridge URLs, calldata, and burn instructions). However, it fails to clarify the safety profile: specifically whether this is a read-only planner or if it triggers actual transactions, which is critical context for a tool mentioning 'burn instructions' and 'swap calldata'.

Conciseness: 5/5

Every element earns its place in a single dense sentence: the action (Build), scope (cross-chain from the ten listed chains to MegaETH), and return value composition (plan with URLs, calldata, instructions). Zero waste, perfectly front-loaded.

Completeness: 4/5

For a 4-parameter bridging tool with complete schema coverage but no output schema, the description adequately compensates by detailing the returned execution plan components. It appropriately omits authentication details (likely wallet-based) but could benefit from mentioning rate limits or execution risks given the financial nature of the operation.

Parameters: 3/5

Schema description coverage is 100%, with parameters fully documented (amount format example, wallet purpose, referrer commission detail). The description meets the baseline score of 3 since the schema adequately defines semantics without requiring additional elaboration in the description text.

Purpose: 4/5

The description clearly states the tool builds a cross-chain 'looksmaxx' plan from specific listed chains to MegaETH, using a specific verb and resource. However, it assumes domain knowledge of what 'looksmaxx' means without contrasting explicitly against the sibling tool 'get_looksmaxx_plan' to clarify when to choose this cross-chain variant.

Usage Guidelines: 3/5

Usage is implied by the enumerated source chains (use when bridging from Ethereum, Base, etc.), but provides no explicit guidance on when to prefer this over 'get_looksmaxx_plan' or 'get_bridge_info', nor any prerequisites or conditions for invocation.

gasless_burn_info (Get Gasless Burn Info) - Grade: A

Get EIP-712 typed data for a gasless burn. Returns the signature payload a wallet must sign, plus approval status. After signing, POST the signature to /api/gasless/burn to relay the burn without paying gas.

Parameters (JSON Schema):
- address (required): Wallet address (0x...)
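The sign-then-relay flow described above can be sketched as follows. Only the tool name, the `address` parameter, and the `/api/gasless/burn` endpoint come from the tool description; the relay body's field names ("typedData", "signature") and the placeholder values are assumptions.

```python
import json

def build_info_call(address: str) -> dict:
    """MCP tools/call request fetching the EIP-712 payload for `address`."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "gasless_burn_info", "arguments": {"address": address}},
    }

def build_relay_request(typed_data: dict, signature: str) -> tuple[str, str]:
    """Path and JSON body for the follow-up POST that relays the signed burn.

    Field names here are assumptions; the description only gives the endpoint.
    """
    return "/api/gasless/burn", json.dumps(
        {"typedData": typed_data, "signature": signature}
    )

# Step 1: fetch the EIP-712 payload, then have the wallet sign it (not shown).
info = build_info_call("0x0000000000000000000000000000000000000001")
# Step 2: relay the signature so the burn executes without the user paying gas.
path, body = build_relay_request({"domain": {}, "message": {}}, "0xdeadbeef")
```

The actual signing step would use a wallet or an EIP-712-capable signing library; it is deliberately left out here since the tool only produces the payload to sign.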
Behavior: 4/5

No annotations provided, so description carries full burden. Effectively discloses returns ('signature payload', 'approval status') and critical next-step behavior (must POST to /api/gasless/burn to complete). Could specify what asset gets burned or required approvals.

Conciseness: 5/5

Three sentences with zero waste: sentence 1 states purpose, sentence 2 details returns, sentence 3 gives essential next-step instruction. Front-loaded with specific technical terminology (EIP-712, gasless burn).

Completeness: 4/5

Strong completeness given no output schema: explains both immediate returns (payload, status) and downstream workflow (POST endpoint). Missing only specific return structure details or confirmation of what exactly gets burned.

Parameters: 4/5

With 100% schema coverage, the description appropriately focuses on contextualizing the single parameter within the EIP-712 typed data flow rather than repeating schema definitions. Mentioning 'wallet must sign' clarifies the address parameter's role as the signer.

Purpose: 5/5

Description uses specific technical language ('Get EIP-712 typed data for a gasless burn') that precisely identifies the operation and distinguishes it from sibling tools like get_swap_quote or get_bridge_info. The verb-resource combination is exact.

Usage Guidelines: 4/5

Describes the multi-step workflow clearly ('Returns the signature payload... After signing, POST... to relay'), which implicitly guides usage. Lacks explicit 'when-not-to-use' alternatives (e.g., vs regular burn), but provides strong procedural context.

get_agent_info (Get MegaChad Agent Info) - Grade: B

Returns ERC-8004 registration metadata, on-chain agent identity (ID, owner, wallet), reputation client count, and contract addresses.

Parameters (JSON Schema): none

Behavior: 3/5

With no annotations provided, the description carries the full disclosure burden. While it lists return fields, it omits behavioral traits such as whether this performs a live blockchain query versus cached data, authentication requirements, error handling for unregistered agents, or confirmation that this is read-only. It describes data but not execution semantics.

Conciseness: 5/5

The description is a single, dense sentence with zero wasted words. It front-loads the core action ('Returns') and immediately specifies the domain-specific data types, efficiently communicating value without redundancy.

Completeness: 3/5

While the description compensates for the missing output schema by listing returned data categories, it lacks structural detail (e.g., nested vs flat object, data types) that would aid an agent in consuming the response. Given the absence of annotations and output schema, more detail on the return shape or error conditions would be necessary for full completeness.

Parameters: 4/5

The input schema contains zero parameters. Per the evaluation guidelines, zero parameters establishes a baseline score of 4, as there are no parameter semantics to elaborate upon beyond the schema itself.

Purpose: 4/5

The description clearly states the tool retrieves specific ERC-8004 registration metadata and on-chain agent identity components (ID, owner, wallet), reputation metrics, and contract addresses. The specificity of 'ERC-8004' and the enumeration of fields naturally distinguishes it from sibling tools like get_identity or get_wallet_info, though it lacks explicit comparative language.

Usage Guidelines: 2/5

The description enumerates return data but provides no guidance on when to invoke this tool versus alternatives like get_identity or get_wallet_info. There are no stated prerequisites, conditions for use, or explicit exclusions.

get_bridge_info (Get Bridge Info) - Grade: A

Returns bridge information for moving assets to MegaETH from Ethereum, Arbitrum, Base, and other chains. Lists canonical and aggregator bridges.

Parameters (JSON Schema): none

Behavior: 3/5

No annotations provided, so description carries full burden. It discloses content types ('canonical and aggregator bridges') which adds behavioral context, but lacks operational details like data freshness, caching behavior, or authentication requirements expected for a third-party bridge data API.

Conciseness: 5/5

Exactly two sentences with zero redundancy. First sentence establishes purpose and domain scope (MegaETH, source chains); second sentence clarifies content categories (canonical vs aggregator). Every word earns its place.

Completeness: 4/5

Appropriately complete for a zero-parameter information tool. Without output schema, the description adequately signals return content (bridge listings by type). Could be improved by noting response format or data freshness, but sufficient for agent selection given the limited complexity.

Parameters: 4/5

Input schema has zero parameters, triggering baseline score of 4. The description appropriately does not invent parameters, and the empty schema is correctly interpreted as a simple information retrieval operation requiring no filtering inputs.

Purpose: 5/5

The description uses specific verb 'Returns' with clear resource 'bridge information' and explicit scope 'for moving assets to MegaETH from Ethereum, Arbitrum, Base, and other chains'. It distinguishes from siblings like get_swap_quote by focusing specifically on bridging infrastructure rather than swapping.

Usage Guidelines: 4/5

Provides clear contextual guidance by specifying the exact use case (moving assets to MegaETH from specific L1/L2 chains), which implies when to invoke it. However, lacks explicit 'when not to use' guidance or named alternatives like cross_chain_looksmaxx.

get_chadboard (Get Chadboard Leaderboard) - Grade: A

Returns the burner leaderboard ranked by total burns. Includes ERC-8004 reputation scores, .mega domain names, profile info, and all looksmaxxed images per burner.

Parameters (JSON Schema): none

Behavior: 3/5

With no annotations provided, the description carries the full burden of disclosure. It compensates partially by detailing the returned data structure (ERC-8004 scores, domain names, profile info, images), but lacks operational details like pagination behavior, caching policies, rate limits, or authentication requirements.

Conciseness: 5/5

The description consists of two efficient sentences with zero waste. The first sentence front-loads the core action and resource, while the second enumerates included data fields. Every sentence earns its place.

Completeness: 4/5

Given the absence of an output schema, the description appropriately compensates by listing the components of the leaderboard return value (scores, domains, images). For a parameterless read operation, this provides sufficient context for invocation, though pagination or response size limits remain undocumented.

Parameters: 4/5

The input schema contains zero parameters, which according to the rubric establishes a baseline score of 4. The description does not need to compensate for parameter documentation.

Purpose: 4/5

The description uses a specific verb ('Returns') and clearly identifies the resource as a 'burner leaderboard ranked by total burns.' While it effectively implies distinction from siblings like get_portfolio or get_gallery through domain-specific terminology (ERC-8004, .mega domains, looksmaxxed), it does not explicitly name alternatives or differentiating use cases.

Usage Guidelines: 2/5

The description provides no explicit guidance on when to use this tool versus alternatives like get_megachad_stats or get_gallery. There are no 'when-not-to-use' exclusions, prerequisites, or contextual triggers mentioned.

get_identity (Get MegaChad Identity) - Grade: A

Resolve a wallet address or .mega name into a unified MegaETH identity profile. Returns: MegaNames data, token balances, burn history & rank, reputation score, referral stats, tier level, and social links. Works as the social identity layer for MegaETH.

Parameters (JSON Schema):
- addressOrName (required): Wallet address (0x...) or .mega name (e.g. "chad.mega" or "chad")
Behavior: 4/5

No annotations provided, but description compensates by comprehensively listing return fields (MegaNames, balances, burn history, reputation, etc.) revealing the full data shape. Would be 5 if it explicitly stated idempotency or caching behavior.

Conciseness: 5/5

Three sentences with zero waste: purpose declaration, return value enumeration, and architectural positioning. Front-loaded with the core function and concrete examples.

Completeness: 4/5

Given no annotations or output schema, the description adequately compensates by detailing return fields. Complete for a single-parameter lookup tool, though an output schema would eliminate the need for the returns list in the description.

Parameters: 4/5

Schema has 100% coverage with clear description. Tool description adds semantic context by framing the parameter as something to be 'resolved' into a profile, reinforcing the lookup/translation nature of the operation.

Purpose: 5/5

Specific verb 'Resolve' with clear resource 'MegaETH identity profile'. Distinguishes from siblings like get_wallet_info and get_portfolio by positioning as the 'unified' social identity layer rather than financial or basic wallet data.

Usage Guidelines: 3/5

Provides contextual positioning as the 'social identity layer' which implicitly differentiates it from financial or stats-focused siblings, but lacks explicit when-to-use/when-not-to-use guidance or named alternatives.

get_looksmaxx_plan (Get Looksmaxx Plan) - Grade: A

Returns a complete, ordered set of transaction instructions for the full looksmaxx flow: swap → burn → tren fund → submit. Each step includes pre-built calldata ready to sign.

Parameters (JSON Schema):
- wallet (required): Wallet address (0x...)
- ethAmount (optional): ETH to swap (e.g. "0.5"). Omit if wallet already has enough $MEGACHAD.
Behavior: 4/5

With no annotations, description carries the full burden. It discloses that outputs are 'pre-built calldata ready to sign', implying this is a read-only planning tool that returns unsigned transactions rather than submitting them, and reveals the 4-step internal flow. Could clarify if it performs any server-side state changes or rate limiting.

Conciseness: 5/5

Single sentence, front-loaded with action verb. Every phrase adds value: 'complete ordered set', specific step sequence, and 'pre-built calldata ready to sign'. Zero redundancy.

Completeness: 4/5

Compensates well for missing output schema by describing the return structure (ordered steps, calldata format). Brief mention of what 'looksmaxx' prerequisites are (e.g., requiring MEGACHAD balance) would elevate this to 5, though ethAmount description partially covers this.

Parameters: 3/5

Schema has 100% description coverage (wallet address format, ethAmount usage), meeting the baseline. Description mentions 'swap' which contextualizes ethAmount, but does not add syntax, validation rules, or format details beyond what the schema provides.

Purpose: 5/5

Description uses specific verb ('Returns') and resource ('transaction instructions'), clearly distinguishing from siblings like get_swap_quote (single-step pricing) by specifying this is the 'full looksmaxx flow' with explicit ordered steps (swap → burn → tren fund → submit).

Usage Guidelines: 3/5

Describes the scope (full flow vs partial) through the step sequence, but lacks explicit when-to-use guidance vs alternatives (e.g., when to use get_looksmaxx_requirements first) or prerequisites for the wallet parameter.

get_looksmaxx_requirements (Get Looksmaxx Requirements) - Grade: B

Returns the x402 payment requirements and $MEGACHAD burn requirements for looksmaxxing. Includes step-by-step instructions, contract addresses, and amounts.

Parameters (JSON Schema): none

Behavior: 3/5

No annotations provided, so description carries full burden. It adequately discloses return contents (step-by-step instructions, contract addresses, amounts) but fails to state whether this is a safe read-only operation or if it triggers any on-chain checks, which is relevant context for a financial/payment-adjacent tool.

Conciseness: 5/5

Two efficient sentences with zero waste. First states core function; second enumerates return payload. Appropriately sized for a zero-parameter lookup tool.

Completeness: 3/5

Adequate given zero parameters, but incomplete for the domain. Lacks output schema, and description compensates partially by listing return contents. However, given this involves payment requirements and token burning, missing safety/auth context prevents a higher score.

Parameters: 4/5

Zero input parameters (baseline 4). Description adds value by defining domain-specific terms (x402, $MEGACHAD, looksmaxxing) that help the agent understand the tool's ecosystem context.

Purpose: 4/5

Clear verb ('Returns') and specific resources (x402 payment requirements, $MEGACHAD burn requirements). However, it fails to distinguish from sibling tool 'get_looksmaxx_plan', leaving ambiguity about whether this is preparatory lookup vs. active planning.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to invoke this tool versus alternatives. Given that the sibling 'cross_chain_looksmaxx' exists, guidance on whether this tool applies to a single chain or to all chains is absent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_megachad_stats: Get $MEGACHAD Token Stats (Grade A)

Returns current $MEGACHAD token statistics: total supply, circulating supply, tokens burned, and total burn count.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. 'Returns' implies read-only operation, but description omits auth requirements (public API vs authenticated), rate limits, data freshness/caching behavior, or error conditions. Minimal viable disclosure for a public statistics endpoint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with colon-delimited list of return fields. Front-loaded with action verb, zero redundancy, every token earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists (has_output_schema: false), so description appropriately compensates by enumerating all four return fields (supply, circulating, burned, count). Adequate for a zero-parameter read operation despite lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, which per guidelines establishes a baseline of 4. No parameter documentation needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Returns' with clear resource '$MEGACHAD token statistics', and enumerates exact metrics (supply, circulating, burned, burn count). This clearly distinguishes from siblings like get_price (pricing), get_portfolio (holdings), and get_referral_stats (referral metrics).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains no explicit when-to-use guidance, prerequisites, or pointer to alternatives (e.g., does not contrast with get_price for market data vs supply data). User must infer usage from the function name and described return values.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_megaeth_protocols: Get MegaETH Protocol Directory (Grade A)

Returns a curated directory of protocols on MegaETH: DEXes, bridges, payment infra, storage, identity systems. Includes contract addresses, features, and links.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds valuable context about the content ('curated', specific fields like contract addresses and links) but omits operational details like pagination behavior, response size limits, rate limiting, or whether the data is cached or real-time.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste. It immediately states the action and resource, uses a colon to efficiently list categories, and front-loads the most critical information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the zero-parameter complexity and lack of output schema, the description appropriately compensates by detailing what fields appear in the return value (contract addresses, features, links). It could be improved by noting if the directory is paginated or frequently updated, but covers the essentials.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero input parameters, which according to the scoring guidelines establishes a baseline score of 4. There are no parameters requiring semantic clarification beyond what the empty schema indicates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Returns' and clearly identifies the resource as a 'curated directory of protocols on MegaETH'. It elaborates the scope by listing protocol categories (DEXes, bridges, payment infra, etc.), which implicitly distinguishes this broad directory from sibling tools like get_bridge_info or get_swap_quote that likely handle specific operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings. It fails to clarify whether this should be called for protocol discovery versus using get_bridge_info for specific bridge details, or whether it requires any prerequisites like a connected wallet.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_nft_metadata: Get NFT Metadata (Grade A)

Returns ERC-721 metadata for a looksmaxxed NFT including image URL (IPFS or Warren on-chain), attributes, and storage properties.

Parameters (JSON Schema)
Name | Required | Description
tokenId | Yes | NFT token ID (numeric string)
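A call to this tool carries a single required `tokenId`. Below is a minimal sketch of the JSON-RPC request body, assuming the standard MCP `tools/call` wire format; the token ID value is a placeholder:

```python
import json

# Hypothetical MCP tools/call request for get_nft_metadata.
# "42" is a placeholder token ID; note the schema requires a numeric
# *string*, not a JSON number.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_nft_metadata",
        "arguments": {"tokenId": "42"},
    },
}
print(json.dumps(request))
```

Passing the token ID as an integer would violate the declared schema, so the string type is worth checking client-side before sending.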
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds valuable context about data sources ('IPFS or Warren on-chain') and identifies the ERC-721 standard explicitly. However, lacks disclosure on error handling (invalid tokenId), caching behavior, rate limits, or whether this requires authentication despite being a getter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with high information density. Front-loaded with action verb ('Returns'), immediately follows with scope (ERC-721, looksmaxxed NFT), and efficiently parenthesizes storage variants (IPFS/Warren). No redundant words or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Absence of output schema is partially compensated by listing specific return fields (image URL, attributes, storage properties). For a single-parameter read operation without destructive side effects, the description provides adequate completeness, though error scenarios could be mentioned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with 'tokenId' already described as 'NFT token ID (numeric string)'. Description focuses entirely on return values rather than elaborating on the parameter (e.g., format constraints, valid ranges, where to find token IDs). With full schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Returns ERC-721 metadata' identifies the exact token standard and protocol, 'looksmaxxed NFT' sets it apart from generic NFT tools and from siblings like get_portfolio or get_gallery, and the description enumerates specific return components (image URL with storage locations, attributes, storage properties).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this versus sibling tools like get_gallery (which likely displays NFTs visually) or get_portfolio (which may include NFT holdings). No preconditions, alternatives, or 'when-not-to-use' advice included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_portfolio: Get MegaETH Portfolio (Grade A)

Get all MegaETH token balances for a wallet: ETH, WETH, $MEGACHAD, USDm. Returns formatted balances with raw values.

Parameters (JSON Schema)
Name | Required | Description
address | Yes | Wallet address (0x...)
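The schema only hints at the address format with "0x...". A minimal client-side precheck, assuming the conventional 20-byte (40 hex character) Ethereum address form, can catch malformed input before the call is made:

```python
import re

# Hypothetical precheck for the `address` argument. The 40-hex-char
# form is an assumption; the schema itself only says "0x...".
ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def looks_like_address(value: str) -> bool:
    """Return True if value matches the usual 0x-prefixed 40-hex form."""
    return bool(ADDRESS_RE.fullmatch(value))

print(looks_like_address("0x" + "ab" * 20))  # well-formed placeholder: True
print(looks_like_address("0x1234"))          # truncated: False
```

This does not verify checksums or on-chain existence, only the surface syntax.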
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds return format disclosure ('formatted balances with raw values') and scope constraints (specific 4-token list). However, omits explicit read-only/safety characteristics and rate limiting details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences. No redundancy. Front-loaded with action and scope. Specific token list and return format efficiently communicated without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a simple single-parameter read tool. Compensates for missing output schema by describing return values ('formatted balances with raw values'). Sufficient given 100% param coverage and clear scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (address param fully described). Description provides minimal additional semantic value beyond schema ('for a wallet' aligns with param description). Baseline 3 appropriate per guidelines for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + specific resource 'MegaETH token balances' + explicit scope (ETH, WETH, $MEGACHAD, USDm). Distinguishes from sibling get_price (price data) and get_wallet_info (general wallet metadata) by specifying portfolio/balance retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through specificity (lists tokens covered) but provides no explicit when-to-use guidance or comparison to alternatives like get_price or get_wallet_info.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_price: Get $MEGACHAD Price (Grade A)

Returns the current $MEGACHAD price in ETH from Kumbaya DEX, plus the estimated ETH cost to burn 225,000 tokens.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully identifies the data source (Kumbaya DEX) and characterizes the burn cost as 'estimated' (indicating calculation behavior). It does not disclose rate limits, caching behavior, or error handling, but covers the essential behavioral traits for a simple getter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, information-dense sentence that front-loads the primary function (returning price) and appends the secondary data point (burn cost). Every element—including the specific DEX name, token ticker, and exact burn quantity (225,000)—serves a communicative purpose without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read operation with no output schema, the description is complete. It specifies what data is returned (price in ETH and estimated burn cost), eliminating ambiguity about return values without requiring explicit output schema documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per evaluation rules, 0-parameter tools receive a baseline score of 4. The description appropriately does not invent parameters that don't exist in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Returns') and identifies the exact resource ($MEGACHAD price from Kumbaya DEX). It distinguishes itself from sibling get_swap_quote (which would be for trading execution) by focusing on current price data and a specific burn cost calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the mention of burn costs, suggesting it's useful when evaluating token burns. However, it lacks explicit guidance on when to use this versus get_swap_quote or gasless_burn_info, and doesn't state prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_referral_stats: Get Referral Stats (Grade B)

Get referral statistics for a registered agent — total referrals, earnings, reward per burn, and calldata for referred burns.

Parameters (JSON Schema)
Name | Required | Description
address | Yes | Agent wallet address (0x...)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the specific data returned (the four statistical fields), which aids in selection. However, it lacks disclosure of error behaviors (e.g., unregistered agent), side effects, or rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence front-loaded with action and resource. The em-dash appendage listing return values earns its place by providing necessary behavioral context. Zero redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple single-parameter read operation with no output schema, the description adequately compensates by enumerating the specific statistics returned. It meets the needs for tool selection despite lacking explicit error documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('Agent wallet address (0x...)'), establishing a clear baseline of 3. The description implies the parameter through 'for a registered agent' but does not add semantic details beyond the schema (e.g., validation rules, format constraints).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Get' with specific resource 'referral statistics' and concrete data fields listed (total referrals, earnings, reward per burn, calldata). However, it does not explicitly distinguish from sibling tool 'register_referral_agent' or clarify when to use this versus 'get_agent_info'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. While 'registered agent' implies a prerequisite state (registration), it does not state that one must call 'register_referral_agent' first or what happens if the address is unregistered.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_swap_quote: Get Swap Quote (Grade A)

Get a swap quote for buying $MEGACHAD with ETH on Kumbaya DEX (MegaETH). Returns router address, calldata params, and slippage-adjusted minimum output. Omit ethAmount for general swap info and contract addresses.

Parameters (JSON Schema)
Name | Required | Description
ethAmount | No | Amount of ETH to swap (e.g. "0.1"). Omit for general info.
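Because `ethAmount` is optional and its absence changes what the tool returns, a client should omit the key entirely rather than send null. A sketch of a hypothetical argument builder illustrating the two call shapes:

```python
import json
from typing import Optional

# Hypothetical helper that builds the arguments for get_swap_quote.
# Per the schema, ethAmount is a decimal string like "0.1" and may be
# omitted entirely to request general swap info and contract addresses.
def swap_quote_args(eth_amount: Optional[str] = None) -> dict:
    args: dict = {}
    if eth_amount is not None:
        args["ethAmount"] = eth_amount
    return args

print(json.dumps(swap_quote_args("0.1")))  # quote for 0.1 ETH
print(json.dumps(swap_quote_args()))       # general info: empty arguments
```

Keeping the amount as a string avoids floating-point precision loss on ETH values.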
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description must carry full behavioral burden. It discloses return contents (router address, calldata, slippage-adjusted output) and conditional behavior, but omits safety-critical DeFi context: whether this is read-only, what slippage percentage is applied, and reversal risks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each earning its place: purpose, return values, and parameter behavior. The structure is front-loaded with the core action and specific context ($MEGACHAD, Kumbaya DEX). No redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the DeFi complexity and lack of output schema/annotations, the description adequately covers the basic contract but leaves gaps: the slippage calculation method is unspecified, safety warnings are absent, and the full structure of 'calldata params' is unexplained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single parameter, the baseline is 3. The description adds that omitting ethAmount returns 'contract addresses' specifically, which elaborates on the schema's 'general info,' but does not significantly enhance parameter semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description identifies the specific verb (Get), resource (swap quote), token ($MEGACHAD), input currency (ETH), and platform (Kumbaya DEX on MegaETH). It distinguishes from siblings like get_price by specifying this returns router address and calldata for execution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to omit the ethAmount parameter ('for general swap info and contract addresses'), but lacks explicit comparison to sibling tools (e.g., when to use get_price vs this tool) or prerequisites like wallet connection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_wallet_info: Get Wallet Info (Grade B)

Check a wallet's ETH balance, $MEGACHAD balance, NFT count, burn eligibility, and early access status.

Parameters (JSON Schema)
Name | Required | Description
address | Yes | Ethereum wallet address (0x...)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits. It lists return data but fails to indicate whether this is read-only, rate-limited, real-time or cached data, or what happens with invalid addresses.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no waste. Front-loaded with the verb 'Check' and efficiently lists the specific data points retrieved. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lists return fields in lieu of an output schema, which is helpful. However, given no annotations and no output schema, the description should provide more behavioral context (e.g., read-only nature, blockchain scope) to fully equip the agent for proper invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single 'address' parameter. The description implies the address is the wallet being checked but adds no syntax details, validation rules, or format examples beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies what data is retrieved (ETH balance, $MEGACHAD balance, NFT count, burn eligibility, early access status) using a specific verb 'Check'. However, it does not explicitly differentiate from sibling tool 'get_portfolio' which likely overlaps in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'get_portfolio' or 'get_nft_metadata'. No prerequisites, conditions, or explicit exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_early_access: Register for Early Access (Grade A)

Register a wallet for MegaChad testnet beta access. Checks on-chain eligibility ($MEGACHAD balance or looksmaxxed NFTs). Returns referral code and access status.

Parameters (JSON Schema)
Name | Required | Description
wallet | Yes | Ethereum wallet address (0x...)
twitter | No | X/Twitter handle (optional for agents)
referralCode | No | Referral code from existing registrant
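With one required and two optional parameters, a client should include only the optionals it actually has. A sketch of a hypothetical argument builder (the wallet and referral code values below are placeholders):

```python
from typing import Optional

# Hypothetical argument builder for register_early_access.
# wallet is required; twitter and referralCode are optional and are
# dropped from the payload when absent rather than sent as null.
def early_access_args(
    wallet: str,
    twitter: Optional[str] = None,
    referral_code: Optional[str] = None,
) -> dict:
    args = {"wallet": wallet}
    if twitter is not None:
        args["twitter"] = twitter
    if referral_code is not None:
        args["referralCode"] = referral_code
    return args

# Placeholder values for illustration only.
print(early_access_args("0x" + "00" * 20, referral_code="CHAD123"))
```

Omitting rather than nulling the optionals keeps the payload consistent with the schema's "optional" semantics.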
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It successfully discloses the on-chain eligibility check mechanism and return values (referral code and access status). Missing explicit warnings about mutation side effects or idempotency, but 'Register' implies state change.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. First sentence establishes action and scope; second sentence covers validation logic and return values. Appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description adequately covers the return values (referral code, status) and eligibility prerequisites. Sufficient for a registration tool of this complexity, though gas costs or idempotency notes would improve it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (wallet, twitter, referralCode), establishing baseline 3. Description adds value by contextualizing wallet eligibility requirements and clarifying that referralCode relates to returned referral codes, but does not add extensive syntax or format details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb (Register) + resource (wallet) + scope (MegaChad testnet beta access). Distinguishes from sibling register_referral_agent by specifying user 'wallet' registration for 'beta access' versus agent registration, though could explicitly name the sibling for absolute clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides eligibility criteria ($MEGACHAD balance or looksmaxxed NFTs) which implies prerequisites for usage, but lacks explicit when-to-use guidance contrasting with register_referral_agent or other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_referral_agent: Register as Referral Agent (Grade B)

Register a wallet as a MegaChad referring agent. Returns registration transaction calldata and a referral code. Referring agents earn 10% of the tren fund portion (11,250 $MEGACHAD) for every burn they refer.

Parameters (JSON Schema)
Name | Required | Description
wallet | Yes | Agent wallet address (0x...)
description | No | Agent description (optional)
mcpEndpoint | No | Agent MCP server URL (optional)
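The stated reward of 11,250 $MEGACHAD per referred burn is internally consistent with the 225,000-token burn quoted by get_price, if one assumes the tren fund portion is half of the burn. A quick arithmetic check under that assumption (the 50% split is inferred from the numbers, not stated by the server):

```python
# Arithmetic check of the referral reward. The 50% tren fund split is
# an assumption: 11,250 is 10% of 112,500, which is half of the
# 225,000-token burn quoted by get_price.
BURN_AMOUNT = 225_000
TREN_FUND_SHARE = 0.5   # assumed split, not stated by the server
REFERRAL_RATE = 0.10    # "10% of the tren fund portion"

reward = int(BURN_AMOUNT * TREN_FUND_SHARE * REFERRAL_RATE)
print(reward)  # 11250, matching the description
```

If the actual tren fund split differs, the 10% rate against the stated 11,250 figure implies a 112,500-token portion regardless.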
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the critical return values (transaction calldata and referral code) and the financial incentive structure (10% earnings), which is necessary given the missing output schema, but omits the idempotency rules, error conditions, and explicit state-change warnings expected when no annotations are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences logically structured: action statement, return value disclosure, and economic incentive. No redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately compensates for the missing output schema by describing the return format, and for the missing annotations by explaining the earning mechanism, though it could clarify transaction execution expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema provides 100% description coverage; description adds no parameter-specific semantics beyond the schema but meets the baseline requirement for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The clear verb 'Register' and specific resource 'MegaChad referring agent' effectively distinguish this from sibling tools like register_early_access through domain-specific terminology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., register_early_access) or prerequisites such as checking existing registration status via get_agent_info first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
