Server Details

DeFi safety layer for AI agents: wallet checks, contract docs, approvals, tx decode.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: acarchidi/agentforge
GitHub Stars: 0

Tool Descriptions: A

Average 3.8/5 across 16 of 16 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 5/5

All 16 tools have clearly distinct purposes, with descriptions that effectively differentiate overlapping domains like token intelligence, research, comparison, and risk metrics. Even similar-sounding tools like 'approval_scan' and 'wallet_safety' are explicitly scoped to different aspects of wallet security.

Naming Consistency: 3/5

Tool names use a mix of patterns: some follow verb_noun (e.g., 'approval_scan', 'pool_snapshot'), others are noun_noun (e.g., 'token_intel', 'contract_docs'), and a few are single-word verbs (e.g., 'sentiment', 'translate'). This inconsistency could confuse an agent expecting a uniform convention.

Tool Count: 5/5

With 16 tools, the server covers a comprehensive range of crypto/blockchain analysis tasks—from token research and contract security to wallet safety and general NLP—without being overly bulky. Each tool serves a necessary function, making the count well-scoped for its purpose.

Completeness: 4/5

The tool surface covers major crypto analysis areas (tokens, contracts, wallets, transactions, gas, pools) and includes general utilities (sentiment, summarize, translate). However, it lacks tools for actionable operations like token approval management or transaction simulation, leaving minor gaps.

Available Tools

16 tools
approval_scan: A

Scan a wallet for risky token approvals. On EVM chains: identifies unlimited ERC-20 approvals and unverified spenders. On Solana: scans SPL token delegate authorities. Returns risk assessment.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network (includes Solana support) | ethereum
address | Yes | Wallet address to scan (0x-prefixed for EVM, base58 for Solana) |
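
For illustration, a minimal sketch of the arguments a client might pass when calling approval_scan over MCP tools/call; the chain values follow the schema above, and both addresses are placeholders rather than real wallets.

# Hypothetical approval_scan arguments; addresses are placeholders.
evm_scan_args = {
    "chain": "ethereum",  # optional, defaults to "ethereum"
    "address": "0x0000000000000000000000000000000000000001",  # 0x-prefixed EVM wallet
}
solana_scan_args = {
    "chain": "solana",  # Solana support noted in the chain description
    "address": "<base58-wallet-address>",  # replace with a real base58 Solana address
}
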
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool is read-only ('scan'), specifies what kinds of risks it identifies (unlimited ERC-20 approvals, unverified spenders, SPL delegate authorities), and that it returns a risk assessment.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no extraneous information. The most important information (purpose and chain-specific details) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a scan tool without an output schema, the description adequately covers input requirements, supported chains, and type of assessment. It could mention output format (e.g., JSON structure) but is sufficient given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The tool description adds contextual meaning by explaining chain-specific behavior, but the parameter descriptions in the schema already cover the basics. The description complements but doesn't significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('scan') and resource ('wallet for risky token approvals'), and explicitly distinguishes between EVM and Solana behavior. This differentiates it from sibling tools like wallet_safety.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly states the tool's purpose and scope (scanning for risky approvals on specific chains). It does not explicitly mention when not to use it or compare to alternatives, but the context is sufficient for an agent to decide.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

code_review: B

Smart contract security analysis. Finds vulnerabilities, suggests gas optimizations, flags best practice violations. Supports Solidity, Rust, Move, TypeScript.

Parameters (JSON Schema)
Name | Required | Description | Default
code | Yes | Smart contract source code |
focus | No | Analysis focus area | all
language | No | Programming language | solidity
previousCode | No | Previous version for diff review |
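
A sketch of a diff-style code_review call under the schema above; the source strings are placeholders, and "all" simply restates the documented default focus.

# Hypothetical code_review arguments; contract sources are placeholders.
review_args = {
    "language": "solidity",                        # defaults to "solidity"
    "focus": "all",                                # analysis focus area; "all" is the default
    "previousCode": "<previous contract source>",  # optional; supplying it enables diff review
    "code": "<current contract source>",           # required source to analyze
}
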
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It lacks details on behavioral traits such as whether the submitted code is stored, whether authentication is required, or whether rate limits apply. Only high-level outputs are mentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no waste. Front-loads the main purpose and lists supported languages. Every sentence is informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks description of return values/output format, and does not clarify optional parameters like previousCode. For a tool with multiple parameters and no output schema, more detail is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds high-level context about the tool's purpose but does not explain individual parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs smart contract security analysis, listing specific outputs (vulnerabilities, gas optimizations, best practices) and supported languages. It distinguishes from sibling tools like approval_scan or contract_docs by its focus on code analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like approval_scan or contract_docs. The description implies usage for code review but does not specify contexts or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

contract_docs: B

Generate documentation for any verified EVM smart contract. Returns function descriptions, risk flags, interaction patterns, and security posture.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
address | Yes | Contract address |
focusFunctions | No | Specific functions to document |
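
A sketch of a contract_docs call; the address is a placeholder, and the list form of focusFunctions is an assumption, since the schema excerpt above does not show its type.

# Hypothetical contract_docs arguments.
docs_args = {
    "chain": "ethereum",                        # defaults to "ethereum"
    "address": "<verified-contract-address>",   # placeholder; contract should be verified
    "focusFunctions": ["transfer", "approve"],  # assumed list of function names to document
}
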
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must disclose behavior. It does not mention whether the tool makes external API calls, has rate limits, or requires authentication. The term 'verified' is used but not explained (e.g., what happens for unverified contracts).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence listing multiple output types is efficient but could be broken into clearer components. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes return values (function descriptions, risk flags, etc.) sufficiently in absence of output schema. However, it lacks information on error conditions (e.g., unverified contract) and edge cases. Missing behavior details for a medium-complexity tool with 3 parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all three parameters. Description adds no additional parameter semantics beyond what the schema already provides. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates documentation for verified EVM smart contracts, listing specific outputs (function descriptions, risk flags, interaction patterns, security posture). This distinguishes it from sibling tools like code_review or token_intel.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like code_review or summarize. The description does not mention prerequisites (e.g., contract must be verified) or provide usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

contract_monitor: A

Monitor recent contract activity for suspicious admin operations, proxy upgrades, ownership changes, and pause events.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
address | Yes | Contract address |
lookbackHours | No | Hours to look back (max 168) |
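
A sketch of a contract_monitor call; the address is a placeholder, and lookbackHours respects the 168-hour ceiling from the schema.

# Hypothetical contract_monitor arguments.
monitor_args = {
    "chain": "ethereum",              # defaults to "ethereum"
    "address": "<contract-address>",  # placeholder
    "lookbackHours": 72,              # any value up to the documented max of 168
}
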
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It states what the tool monitors but does not reveal output format, pagination, rate limits, or whether it is read-only. The description adds some context beyond the schema but lacks thorough behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded with the verb 'Monitor' and efficiently conveys the core functionality. There is no redundancy or unnecessary information, making it appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and missing annotations, the description covers the basic input context and monitoring scope. However, it does not inform the agent about the output format, potential limits, or how to interpret results. For a monitoring tool, this leaves moderate gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema description coverage is 100% (all parameters have descriptions), so the baseline is 3. The tool description does not add further meaning to the parameters; it only reiterates the tool's overall purpose. No additional value is provided for parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Monitor') and resource ('contract activity') and lists precise events (suspicious admin operations, proxy upgrades, ownership changes, pause events), making the tool's purpose very clear. It distinguishes from sibling tools like code_review or token_intel which focus on different aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is given. The purpose is evident, but the description does not mention alternatives or contexts where other tools might be preferred. The agent must infer usage from the name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gas_oracle: A

Current gas prices (slow/standard/fast) for any supported EVM chain with trend analysis.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description only mentions output (gas prices, trend analysis) without disclosing behavioral traits like read-only nature, authentication needs, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no superfluous words, effectively front-loading the purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional parameter and no output schema, the description covers the main functionality but lacks details on output format, data freshness, or what 'trend analysis' entails.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds value beyond the schema's 'Blockchain network' by specifying the output includes 'slow/standard/fast' prices and 'trend analysis', enriching the parameter's meaning despite 100% schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'current gas prices (slow/standard/fast)' for 'any supported EVM chain' with 'trend analysis', which is specific and distinguishes it from siblings like 'token_compare' or 'code_review'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives, such as other chain-specific tools or why one might choose 'gas_oracle' over 'pool_snapshot'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pool_snapshot: A

Get a cached snapshot of top DeFi liquidity pools. Filter by protocol (e.g. "uniswap-v3"), chain (e.g. "ethereum"), or token symbol (e.g. "ETH"). Returns TVL, APY, 24h volume, IL risk, and registry enrichment. Data refreshed every 15 minutes.

Parameters (JSON Schema)
Name | Required | Description | Default
pool | No | Filter by specific pool address or DeFi Llama pool ID |
chain | No | Filter by chain, e.g. "ethereum", "base", "arbitrum" |
limit | No | Max results (1-100) |
order | No | Sort order | desc
token | No | Filter pools containing this token symbol, e.g. "ETH", "USDC" |
offset | No | Pagination offset |
sortBy | No | Sort field | tvl
protocol | No | Filter by protocol name, e.g. "uniswap-v3", "curve", "aave" |
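
A sketch of a filtered, paginated pool_snapshot query combining the documented filters, sorting, and pagination parameters; all values are illustrative.

# Hypothetical pool_snapshot query.
pool_query = {
    "protocol": "uniswap-v3",  # filter by protocol name
    "chain": "ethereum",       # filter by chain
    "token": "ETH",            # only pools containing this token symbol
    "sortBy": "tvl",           # sort field; "tvl" is the default
    "order": "desc",           # sort order; "desc" is the default
    "limit": 25,               # max results, 1-100
    "offset": 0,               # pagination offset
}
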
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses that data is cached and refreshed every 15 minutes, implying staleness. It describes return fields but does not explicitly state it is read-only or mention rate limits. Given the read-only nature, this is adequately transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first states main action, second lists filters, third lists returns and refresh info. Every sentence is necessary and no filler. Front-loaded with the primary purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and no annotations, the description covers purpose, filters, returns, and staleness. It does not explain pagination or sorting beyond the schema, but those are documented in the schema. A mention of rate limits or read-only would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptions for all 8 parameters. Description adds value by providing example filters (protocol, chain, token) and listing return fields (TVL, APY, etc.) not in schema. This helps the agent understand what parameters to use and what to expect.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Get', the resource 'cached snapshot of top DeFi liquidity pools', and what it returns (TVL, APY, volume, IL risk). It distinguishes from sibling tools like token_intel or gas_oracle by focusing on liquidity pools with filtering options.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description lists filtering options and return data, providing context for when to use the tool (e.g., querying DeFi pool data). However, it lacks explicit guidance on when not to use it or alternatives among siblings, such as token_intel for token-specific data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

registry_lookup: A

Look up a contract address in the Known Contract Label Registry. Returns protocol name, category, risk level, and tags. Free — no payment required.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network (e.g., ethereum, base) |
address | Yes | Contract address (0x-prefixed) |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description states it returns protocol name, category, risk level, and tags, and emphasizes it is free. However, it does not disclose potential side effects (none expected), error handling, or data freshness. Without annotations, the description carries the full burden, and it meets minimum expectations for a read-only lookup.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loads the action and outputs, then adds the free information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup without output schema, the description covers purpose, inputs (implied), and outputs. It lacks details on optionality of chain parameter or error behavior, but the tool is straightforward and the context is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for both parameters. The description does not add additional context beyond the schema, which is sufficient. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('look up a contract address') and the resource ('Known Contract Label Registry'), with specific output fields. It distinguishes from sibling tools like token_intel or token_research by focusing on a registry of known contracts with risk labels.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The mention of 'Free — no payment required' slightly hints at a differentiator, but does not specify when a user should choose this over token_intel or other lookup tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sentiment: A

Analyze sentiment of text in crypto, finance, social media, or general context. Returns score (-1 to 1), confidence, label (very_bearish to very_bullish), reasoning, and per-entity sentiment.

Parameters (JSON Schema)
Name | Required | Description | Default
text | Yes | Text to analyze for sentiment |
context | No | Context for sentiment analysis | crypto
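
A sketch of a sentiment call; the text is an invented example, and "crypto" simply restates the default context.

# Hypothetical sentiment arguments.
sentiment_args = {
    "text": "Fees dropped sharply after the upgrade and volume is climbing.",  # example text
    "context": "crypto",  # default; finance, social media, and general are also described
}
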
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It describes output fields (score, confidence, label, reasoning, per-entity sentiment) but does not disclose behavioral traits like whether the operation is read-only, synchronous, or has rate limits. Moderate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence that front-loads the main action and includes all key information. No unnecessary words; highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple parameters (2, one required) and no output schema, the description adequately covers purpose, inputs, and output structure. Could mention typical use cases or limits, but overall complete for a straightforward sentiment tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both parameters have descriptions). The description adds value by explaining the return format and enumerating contexts beyond the schema. Baseline 3, plus extra return info justifies a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes sentiment of text, specifying four contexts (crypto, finance, social media, general). This distinguishes it from all sibling tools, which are blockchain/crypto-related (e.g., gas_oracle, token_intel).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies use for sentiment analysis across listed contexts, it does not explicitly state when to use this tool versus alternatives or provide exclusions. No sibling tool performs sentiment analysis, so the context is clear but not fully guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

summarize: A

Summarize text with configurable length (brief/standard/detailed), format (prose/bullet_points/structured), and optional topic focus. Returns summary, key points, and compression ratio.

Parameters (JSON Schema)
Name | Required | Description | Default
text | Yes | Text to summarize |
focus | No | Optional topic to focus the summary on |
format | No | Output format | structured
maxLength | No | Summary length | standard
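
A sketch of a summarize call exercising the length, format, and focus options; the text and topic are placeholders.

# Hypothetical summarize arguments.
summary_args = {
    "text": "<long report or article text>",  # placeholder
    "maxLength": "brief",                     # brief / standard / detailed; default is "standard"
    "format": "bullet_points",                # prose / bullet_points / structured; default is "structured"
    "focus": "protocol risk",                 # optional topic focus (illustrative)
}
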
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses return values (summary, key points, compression ratio) and configurable behavior. No annotations provided, but description covers behavioral traits sufficiently without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences that front-load the core purpose and capabilities. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains return format (summary, key points, compression ratio). Covers most essential aspects for a text summarization tool with 4 parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions; the description reiterates enum values and optionality but adds minimal new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's action (summarize text) and lists configurable options (length, format, focus). It distinguishes itself from siblings, which are unrelated tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use or avoid this tool versus alternatives. The sibling tools are distinct, so usage is implied, but explicit context is lacking.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_compare: A

Compare a primary token against up to 3 others. Returns full research on primary, abbreviated metrics on comparisons, plus AI comparative analysis.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
compare | Yes | Tokens to compare against (1-3) |
primary | Yes | Primary token to research |
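
A sketch of a token_compare call; the list form of compare is inferred from the "(1-3)" constraint, and the token symbols are illustrative.

# Hypothetical token_compare arguments.
compare_args = {
    "chain": "ethereum",                 # defaults to "ethereum"
    "primary": "UNI",                    # receives full research
    "compare": ["AAVE", "COMP", "CRV"],  # 1-3 tokens; receive abbreviated metrics
}
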
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must cover behavior. It discloses the output structure (full research on primary, abbreviated on comparisons, AI analysis) but omits details on authentication, rate limits, or potential errors. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the action and key constraints. Every word earns its place; no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters and no output schema, the description adequately explains the purpose and return structure. However, it lacks details on data sources, error handling, or what happens if tokens are not found. Still, it covers the core functionality sufficiently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all 3 parameters. The description does not add significant meaning beyond the schema, merely restating the purpose. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: comparing a primary token against up to 3 others. It uses specific verbs and resources, and distinguishes from sibling tools that likely focus on single-token analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for comparing tokens but lacks explicit guidance on when to use this tool versus alternatives like token_research or token_intel. No when-not-to-use scenarios are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_intel: A

Lightweight token lookup: price, market cap, volume, and basic risk assessment for any EVM or Solana token.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
address | Yes | Token contract address or name |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It reveals that the tool performs a 'basic risk assessment' and retrieves pricing data, but it does not disclose behavioral traits like data freshness, rate limits, or whether it is read-only. This is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is front-loaded and concise. Every word adds value, clearly stating the tool's purpose and scope without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema and no output schema, the description lists the key data fields returned (price, market cap, volume, risk assessment), which is sufficient for an agent to understand what to expect. Could be more explicit about the output format, but overall complete for a lightweight lookup.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description does not need to add much. It mentions 'EVM or Solana token' which aligns with the chain enum, and the schema already describes the address parameter as 'Token contract address or name'. No additional meaning is provided beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb (lookup) and resource (token), and clearly indicates the scope (EVM or Solana tokens). It distinguishes itself from siblings like token_compare and token_research by emphasizing 'lightweight' lookups including price, market cap, volume, and risk assessment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for quick token data checks but provides no explicit guidance on when to use this tool vs. alternatives such as token_research or token_risk_metrics. No when-not-to-use or context for exclusion is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_research: B

Multi-source token intelligence: market data, DeFi metrics, contract verification, prediction markets, holder analysis, price history, and AI risk assessment. Aggregates CoinGecko, DeFiLlama, Etherscan, and Polymarket.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
query | Yes | Token name, symbol, or contract address |
include | No | Data modules to include (default: market_data, defi_metrics, contract_info, risk_assessment) |
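
A sketch of a token_research call; the include list simply restates the documented default modules, and the query value is illustrative.

# Hypothetical token_research arguments.
research_args = {
    "chain": "ethereum",  # defaults to "ethereum"
    "query": "LINK",      # token name, symbol, or contract address
    "include": ["market_data", "defi_metrics", "contract_info", "risk_assessment"],  # documented defaults
}
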
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description does not disclose behavioral traits such as external API calls, rate limits, or authentication requirements. It only lists sources without explaining potential side effects or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load key capabilities and sources with no redundant information. Well structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description provides a high-level overview but lacks detail on return format, pagination, or constraints. For a tool with no output schema and moderate complexity, more context would improve usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents parameters. The description adds limited value by mentioning data sources but does not map them to the 'include' options explicitly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a multi-source token intelligence tool listing specific data categories and sources. However, it does not differentiate from siblings like token_intel or token_compare, which may have overlapping purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs alternatives. The description lacks any contextual advice on appropriate use cases or when to avoid it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

token_risk_metrics: A

Quantitative risk metrics for any ERC-20 token: holder concentration (top 10 holder %), contract permissions (can mint/burn/pause/blacklist), liquidity depth vs market cap, deployer history, and weighted composite risk score (0-100). Pre-computed for top tokens, live-computed for others.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
address | Yes | Token contract address (0x-prefixed) |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description adequately discloses behavior: it notes that metrics are pre-computed for top tokens and live-computed for others, indicating the computational approach. It does not mention side effects, but the tool is clearly read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence lists all metrics upfront; the second adds crucial computational context. Perfectly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description lists all expected return components. It covers the tool's scope, computational behavior, and token focus. No gaps given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add any extra meaning beyond the schema fields (chain and address). The parameter names and descriptions in the schema are already clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it computes quantitative risk metrics for ERC-20 tokens, listing specific metrics (holder concentration, permissions, liquidity, deployer history, composite score). This verb+resource combination is unique among siblings like token_intel or token_compare.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it does not explicitly name alternatives, the comprehensive list of metrics and the mention of pre/live computation provide clear context for when to use this tool. It implies it is for risk assessment rather than general token lookup or comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

translate: A

Translate text to any language with tone control (formal/casual/technical). Auto-detects source language. Preserves formatting and cultural nuances.

Parameters (JSON Schema)
Name | Required | Description | Default
text | Yes | Text to translate |
tone | No | Translation tone | formal
sourceLanguage | No | Source language (auto-detected if omitted) |
targetLanguage | Yes | Target language (e.g., Spanish, French, Japanese) |
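
A sketch of a translate call; the text is invented, and sourceLanguage is omitted to rely on the documented auto-detection.

# Hypothetical translate arguments.
translate_args = {
    "text": "Gas fees are unusually high right now; consider waiting.",  # example text
    "targetLanguage": "Spanish",  # e.g., Spanish, French, Japanese
    "tone": "casual",             # formal / casual / technical; default is "formal"
    # sourceLanguage omitted: auto-detected per the schema
}
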
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must cover behavioral traits. It discloses that source language auto-detection is available, formatting and cultural nuances are preserved, and tone can be specified. This adds value beyond basic purpose, though it omits details like rate limits, error handling, or synchronous vs. asynchronous execution.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that front-loads the primary action ('Translate text to any language'), immediately followed by key features. Every word serves a purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description should ideally explain the return value format. It mentions preserving formatting and cultural nuances, but does not explicitly state what is returned (e.g., a translated text string). Error handling, the schema's 20k maximum text length (not mentioned in the description), and the supported languages are also omitted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, meeting the baseline of 3. The description restates the tone enum and auto-detection behavior already present in the schema, but does not add significant new parameter semantics beyond reaffirming existing constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: translating text to any language with tone control, auto-detection of source language, and preservation of formatting and cultural nuances. It uses a specific verb ('Translate') and resource ('text'), and notably, no sibling tool performs translation, making it easily distinguishable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for translation tasks with tone control but lacks explicit guidance on when to use versus alternatives. There are no siblings for translation, so no exclusions are needed, but it does not include when-not-to-use scenarios or prerequisites such as supported language lists.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

tx_decode: A

Decode any EVM transaction: function call, parameters, token transfers, and plain-English explanation.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network | ethereum
txHash | Yes | Transaction hash (0x-prefixed, 64 hex chars) |
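
A sketch of a tx_decode call; the hash is a placeholder built only to satisfy the 0x-prefixed, 64-hex-character constraint.

# Hypothetical tx_decode arguments.
decode_args = {
    "chain": "ethereum",         # defaults to "ethereum"
    "txHash": "0x" + "ab" * 32,  # placeholder hash: 0x-prefixed, 64 hex chars
}
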
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries full burden. It lists outputs but omits potential limitations, error handling, or behavior when inputs are invalid. It does not contradict any annotations since none are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the core purpose and outputs. No redundant words, and the critical information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema), the description covers key outputs. Minor gap: no mention of error handling or chain-specific behavior, but overall fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so each parameter has a description. The tool description does not add further meaning beyond what the schema provides, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'decode' and the resource 'any EVM transaction', listing specific outputs (function call, parameters, token transfers, plain-English explanation). This distinguishes it from sibling tools that perform different actions like approval scanning or code review.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for decoding transactions but provides no explicit guidance on when to use this tool versus alternatives, nor any conditions or exclusions. It lacks context like prerequisites or constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

wallet_safety: A

Comprehensive wallet safety check: scans approvals, analyzes recent transaction activity for suspicious patterns, and assesses target contract risk. Returns composite risk score (0-100), risk level, action items, and related service suggestions. Supports EVM chains and Solana.

Parameters (JSON Schema)
Name | Required | Description | Default
chain | No | Blockchain network (includes Solana support) | ethereum
depth | No | Analysis depth: quick (approvals only), standard (approvals + activity), deep (extended history + all patterns) | standard
walletAddress | Yes | Wallet address to check (0x-prefixed for EVM, base58 for Solana) |
targetContract | No | Optional target contract address to assess before interaction |
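
A sketch of a wallet_safety call using the deepest analysis level and the optional pre-interaction contract check; both addresses are placeholders.

# Hypothetical wallet_safety arguments; addresses are placeholders.
safety_args = {
    "chain": "ethereum",                                 # defaults to "ethereum"
    "walletAddress": "<0x-prefixed-wallet-address>",     # use a base58 address when chain is Solana
    "depth": "deep",                                     # quick / standard / deep; default is "standard"
    "targetContract": "<0x-prefixed-contract-address>",  # optional: assess before interacting
}
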
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that it scans approvals, analyzes activity, and assesses contract risk, and mentions the return type. However, it does not explicitly state whether the tool is read-only or any auth/rate limit considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, consisting of two sentences that front-load the main purpose, then summarize outputs and supported chains. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description adequately explains return values. Parameter descriptions are present. The tool's scope and supported chains are clear. Minor omission: not stating whether it is read-only, but overall complete for a safety check tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 4 parameters are described in the schema with 100% coverage. The description adds context about supported chains and output types, but does not significantly enhance individual parameter semantics beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it performs a comprehensive wallet safety check including approvals, activity analysis, and contract risk assessment, returning composite risk score, risk level, action items, and suggestions. It distinguishes from siblings like approval_scan (specific to approvals) and code_review.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for a full wallet safety check, but lacks explicit guidance on when to use this tool versus alternatives like approval_scan or contract_docs. No when-not-to-use guidance or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

