Wolfpack Intelligence
Server Details
On-chain security and market intelligence for trading agents on Base.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
10 tools

agent_credit_risk_index (Agent Credit Risk Index) · A · Read-only · Idempotent
Financial credit risk scoring for ACP agents. Three pillars: realized liquidity (USDC balance vs job fees, 40%), execution reliability (success rate weighted by volume, 40%), wallet maturity (age + VIRTUAL holdings, 20%). Returns credit score 0-100 with rating (prime/standard/subprime/unrated). Handles liquid-lean agents (high volume, low balance) without penalty. $1.00/query.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Virtuals ACP agent ID (integer) | |
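The three-pillar weighting in the description can be illustrated as a plain weighted sum. A sketch only: the pillar sub-scores and the rating cutoffs below are hypothetical, since the tool names the categories but does not publish its thresholds or pillar formulas.

```python
def credit_score(liquidity: float, reliability: float, maturity: float) -> float:
    """Combine three pillar scores (assumed 0-100 each) with the documented
    40/40/20 weights: realized liquidity, execution reliability, wallet maturity."""
    return 0.40 * liquidity + 0.40 * reliability + 0.20 * maturity

def rating(score: float) -> str:
    # Hypothetical cutoffs; the tool only names prime/standard/subprime/unrated.
    if score >= 80:
        return "prime"
    if score >= 50:
        return "standard"
    return "subprime"

# An agent strong on execution but lean on balance can still score well,
# consistent with the "liquid-lean" handling the description promises.
score = credit_score(liquidity=55, reliability=95, maturity=70)
```
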
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety profile; description adds substantial behavioral context including exact algorithm weights (40/40/20), rating categories (prime/standard/subprime/unrated), cost per query, and edge-case handling logic for liquid-lean agents.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense structure with zero waste: purpose statement, methodology breakdown, output specification, edge-case handling, and pricing—each sentence earns its place. Front-loaded with the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, description comprehensively documents return values (0-100 score with ratings), explains the complex scoring algorithm, discloses pricing, and covers edge cases—sufficient for a financial scoring tool with simple input requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single agent_id parameter. Description reinforces the 'ACP agents' context but does not add syntax constraints, examples, or semantic details beyond what the schema already provides; baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Financial credit risk scoring for ACP agents' with specific verb and resource. The detailed three-pillar methodology (liquidity, execution reliability, wallet maturity) effectively distinguishes this from sibling tool agent_trust_score by emphasizing financial metrics over reputation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage guidance through cost disclosure ('$1.00/query') and special handling notes ('Handles liquid-lean agents without penalty'), but lacks explicit when-to-use comparisons or named alternatives versus siblings like agent_trust_score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_trust_score (Agent Trust Score) · A · Read-only · Idempotent
Composite agent reliability rating. Scores any ERC-8004/ACP registered agent on 4 pillars: ACP performance (40%), network position (25%), operational health (25%), metadata compliance (10%). Accepts agent_address (0x hex) or agent_id (number). Returns trust score 0-100 with breakdown.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | No | ACP/ERC-8004 numeric agent ID | |
| agent_address | No | 0x-prefixed wallet address of the agent to score | |
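The input contract ("agent_address (0x hex) or agent_id (number)") and the 40/25/25/10 weighting can be sketched as below. The exactly-one-identifier rule and the pillar values are assumptions, not published behavior.

```python
def trust_score(acp_performance, network_position, operational_health, metadata_compliance):
    """Weighted composite of the four pillars (assumed 0-100 each):
    ACP performance 40%, network position 25%, operational health 25%,
    metadata compliance 10%."""
    return (0.40 * acp_performance
            + 0.25 * network_position
            + 0.25 * operational_health
            + 0.10 * metadata_compliance)

def build_arguments(agent_id=None, agent_address=None):
    """Assemble tool-call arguments. The description says address *or* ID;
    this sketch assumes exactly one must be supplied."""
    if (agent_id is None) == (agent_address is None):
        raise ValueError("supply exactly one of agent_id or agent_address")
    if agent_address is not None:
        if not (agent_address.startswith("0x") and len(agent_address) == 42):
            raise ValueError("expected a 0x-prefixed 20-byte hex address")
        return {"agent_address": agent_address}
    return {"agent_id": int(agent_id)}
```
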
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/idempotent safety; description adds substantial behavioral context: calculation methodology (4 pillars with percentage weights), input format constraints (0x hex), and output structure (0-100 with breakdown). No contradictions with annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences: definition + methodology detail, input specification, output specification. No redundancy. Percentage breakdown earns its place by explaining calculation weights. Front-loaded with key resource identification.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by detailing return value range (0-100) and structure ('breakdown'). 100% input schema coverage and rich methodology explanation (4 pillars) provide sufficient context for a read-only lookup tool. Minor gap regarding whether exactly one parameter is required.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value by clarifying the XOR relationship between inputs ('or') and specifying address format ('0x hex'), semantics not fully captured by schema alone.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Scores' with clear resource 'ERC-8004/ACP registered agent'. Distinguishes from siblings like agent_credit_risk_index (credit vs trust) and token_risk_analysis (token vs agent) by specifying domain-specific target agents and methodology (4 pillars with weights).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear domain scope (ERC-8004/ACP agents) and input options ('Accepts agent_address...or agent_id') which implies when to use, but lacks explicit 'when-not' guidance or comparison to siblings like agent_credit_risk_index. Usage is inferred from functional description rather than explicitly stated.
graduation_readiness_check (Graduation Readiness Check) · A · Read-only
Live graduation readiness audit for ACP agents ($5.00 USDC). Fires real test jobs against your services, scores job lifecycle handling, schema correctness, and output consistency. Provides blockers, warnings, and remediation steps.
| Name | Required | Description | Default |
|---|---|---|---|
| offering_name | No | Specific service offering to check (default: all) | |
| target_agent_address | Yes | 0x-prefixed wallet address of the ACP agent to check | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover readOnly/destructive hints, so the description's value lies in disclosing the $5.00 USDC cost (financial impact), explaining that it fires 'real test jobs' (operational mechanism), and detailing the three output categories (blockers, warnings, remediation steps) which provides necessary context beyond the annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence one establishes identity and cost, sentence two explains the testing methodology, sentence three defines deliverables. Information is front-loaded and every clause earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately compensates for the missing output schema by enumerating the three output components (blockers, warnings, remediation). It covers cost, methodology, and return concepts. A minor gap remains regarding whether the operation is synchronous or if results are delivered via callback.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both parameters documented). The description does not add syntax details or parameter constraints beyond the schema, but the high schema coverage means the baseline of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies a clear verb-resource combination ('Live graduation readiness audit') and distinguishes from siblings like agent_trust_score or security_check by detailing the specific methodology: firing real test jobs against services to evaluate job lifecycle handling, schema correctness, and output consistency.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the $5.00 USDC cost, which implies usage constraints, but provides no explicit guidance on when to use this versus siblings like security_check or technical_analysis, nor does it mention prerequisites or when-not-to-use conditions.
il_calculator (Impermanent Loss Calculator) · A · Read-only · Idempotent
Calculate impermanent loss for standard AMM (x*y=k) or concentrated liquidity (V3) positions. Optionally compute net yield after IL with APY.
| Name | Required | Description | Default |
|---|---|---|---|
| pool_type | No | standard (default) or concentrated | |
| entry_price | Yes | Price at LP entry | |
| price_lower | No | Lower range bound (concentrated only) | |
| price_upper | No | Upper range bound (concentrated only) | |
| holding_days | No | Holding period in days (default: 7) | |
| current_price | Yes | Current price | |
| gross_apy_percent | No | Gross APY for net yield calculation | |
| position_size_usd | Yes | Position size in USD | |
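For the standard x*y=k case, impermanent loss has a closed form in the entry-to-current price ratio. The net-yield arithmetic below is a sketch using simple interest; the tool's exact compounding assumptions are not published.

```python
import math

def impermanent_loss(entry_price: float, current_price: float) -> float:
    """IL of a 50/50 x*y=k LP position versus simply holding, as a
    (usually negative) fraction. For r = current/entry:
        IL = 2*sqrt(r) / (1 + r) - 1
    e.g. a 4x price move gives IL = -0.20, a 20% shortfall vs HODL."""
    r = current_price / entry_price
    return 2 * math.sqrt(r) / (1 + r) - 1

def net_yield_usd(entry_price, current_price, position_size_usd,
                  gross_apy_percent, holding_days=7):
    """Pro-rated gross yield plus the IL hit, in USD (simple interest)."""
    fees = position_size_usd * (gross_apy_percent / 100) * (holding_days / 365)
    return fees + position_size_usd * impermanent_loss(entry_price, current_price)
```
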
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive safety. The description adds valuable domain context by specifying the mathematical models (x*y=k, V3) and clarifying the dual calculation modes. Does not disclose output format or edge case handling (e.g., price=0), but effectively leverages the annotation safety profile to focus on functional scope.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero redundancy. First sentence establishes core purpose and supported protocols; second sentence identifies the optional calculation path. Front-loaded structure places the primary verb ('Calculate') and subject ('impermanent loss') immediately.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage of the 8-parameter input space and dual calculation modes. While no output schema exists, the description adequately conveys the tool's function as a financial calculator. Given the solid annotations and complete input schema, the description successfully covers the tool's complexity without requiring return value documentation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds semantic value by mapping 'pool_type' to specific AMM implementations ('standard AMM', 'concentrated liquidity') and linking 'gross_apy_percent' to the 'net yield after IL' concept, providing business context beyond the technical schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Calculate impermanent loss' names the exact operation, while 'standard AMM (x*y=k)' and 'concentrated liquidity (V3)' precisely scope the supported protocols. This clearly distinguishes it from sibling analysis tools like technical_analysis or yield_scanner which perform broader market functions rather than specific LP math.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage guidance by noting the optional net yield calculation ('Optionally compute net yield after IL with APY'), which signals when to include the gross_apy_percent parameter. However, lacks explicit guidance on when to select this tool versus siblings like yield_scanner or token_risk_analysis, and includes no prerequisites or error conditions.
prediction_market (Prediction Market Analysis) · A · Read-only
Crypto prediction market data from Polymarket. Returns current odds, volume, and liquidity for crypto-relevant events.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of markets to return (default: 10, max: 50) | |
| query | No | Search query to filter markets (e.g. "bitcoin", "ethereum") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations confirm the read-only, non-destructive nature of the operation, the description adds valuable behavioral context by disclosing the data source (Polymarket), the specific fields returned (odds, volume, liquidity), and the domain filter (crypto-relevant events). This compensates for the lack of an output schema.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences that front-load the data source and exactly specify the return payload without redundancy. Every word earns its place, and there is no filler or tautological restatement of the tool name.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description appropriately compensates by detailing the specific data fields returned (odds, volume, liquidity). Combined with comprehensive annotations and simple parameter structures, this provides sufficient context for a read-only data retrieval tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured schema already documents both parameters including examples (e.g., 'bitcoin', 'ethereum' for the query parameter). The description provides no additional parameter semantics beyond what the schema explicitly declares, warranting the baseline score.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource as 'Crypto prediction market data from Polymarket' and specifies the returned metrics (odds, volume, liquidity), which distinguishes it from siblings like token_market_snapshot or technical_analysis. However, it does not explicitly contrast its use case against these related market data tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states what the tool returns but provides no guidance on when to use it versus alternatives like token_market_snapshot, nor does it mention prerequisites such as specific market IDs or required query formats. Usage must be inferred entirely from the purpose statement.
security_check (Quick Security Check) · A · Read-only · Idempotent
Fast GoPlus-only token security scan. Returns honeypot status, contract verification, ownership flags, and holder concentration. Sub-second response time. No social or on-chain analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Blockchain network. Default: "base". Supports: base, ethereum | |
| token_address | Yes | 0x-prefixed contract address to scan for security issues | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read/idempotent/non-destructive safety. The description adds crucial behavioral context not in structured fields: data source (GoPlus), performance characteristics (sub-second), and specific return value preview (honeypot status, ownership flags) despite lacking an output schema.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences: tool definition, return values, performance constraint, and scope limitation. Every sentence earns its place with zero redundancy. Key differentiators ('GoPlus-only', 'Fast') are front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and comprehensive annotations, the description compensates well for the missing output schema by enumerating return fields. It effectively distinguishes the tool's scope from siblings. Minor gap: no mention of rate limits or error conditions.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both chain (including defaults and supported values) and token_address (0x-prefixed format) fully documented. The description mentions 'GoPlus-only' but doesn't elaborate on parameter syntax or semantics beyond what the schema provides, warranting the baseline score.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs (scan) and resource (token) and clearly distinguishes from siblings via 'GoPlus-only' and 'No social or on-chain analysis', positioning it distinctly from tools like token_risk_analysis that presumably offer broader analysis.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual signals for selection via 'Fast' and 'Sub-second response time' (when speed matters) and explicit scope limitations ('No social or on-chain analysis'). However, it stops short of explicitly naming sibling alternatives like token_risk_analysis for comprehensive checks.
technical_analysis (Technical Analysis) · A · Read-only · Idempotent
Deterministic TA indicators from GeckoTerminal OHLCV candle data. RSI(14), SMA(20/50), Bollinger Bands, support/resistance levels, volume profile, trend direction.
| Name | Required | Description | Default |
|---|---|---|---|
| timeframe | No | Candle timeframe: 15m, 1h (default), 4h, 1d | |
| token_address | Yes | 0x-prefixed token contract address on Base | |
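The listed indicators are standard computations over OHLCV closes. A minimal sketch of two of them, using simple averaging for RSI rather than Wilder's smoothing (the tool does not say which variant it uses):

```python
def sma(closes, window):
    """Simple moving average of the most recent `window` closes."""
    if len(closes) < window:
        raise ValueError("not enough candles")
    return sum(closes[-window:]) / window

def rsi(closes, period=14):
    """Relative Strength Index over the last `period` price changes.
    Simple averaging here; production RSI(14) often applies Wilder's
    smoothing, which yields slightly different values."""
    if len(closes) < period + 1:
        raise ValueError("need at least period + 1 closes")
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    recent = deltas[-period:]
    avg_gain = sum(d for d in recent if d > 0) / period
    avg_loss = sum(-d for d in recent if d < 0) / period
    if avg_loss == 0:
        return 100.0  # all gains in the window
    return 100 - 100 / (1 + avg_gain / avg_loss)
```
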
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare the read-only/idempotent safety profile, so the description carries a lower burden. Adds valuable context about data source (GeckoTerminal) and specific indicator parameters (RSI(14), SMA(20/50)), but lacks disclosure of error behaviors (invalid contract handling), data freshness, or rate limits.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence with zero waste. Information hierarchy places data source first, then indicator taxonomy. Every term conveys specific technical meaning (OHLCV, RSI(14), volume profile).
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Good completeness for read-only analysis tool given strong annotations and 100% schema coverage. Enumerates specific calculated outputs compensating for missing output schema. Minor gap: no mention of return structure format or array lengths.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both token_address and timeframe fully described in JSON schema), establishing baseline 3. Description mentions neither parameter nor their relationships (e.g., that timeframe affects calculation windows), but schema compensates adequately.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource combination ('Deterministic TA indicators from GeckoTerminal OHLCV candle data') clearly distinguishes from siblings like token_market_snapshot or token_risk_analysis. Lists exact indicators calculated (RSI, SMA, Bollinger Bands, etc.), making scope unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like token_market_snapshot (which may also provide price data) or prediction_market. No mention of prerequisites (e.g., valid Base chain tokens only) or when results might be unavailable.
token_market_snapshot (Token Market Snapshot) · A · Read-only · Idempotent
Comprehensive market data snapshot for any Base token via DexScreener. Returns price, volume (1h/6h/24h), liquidity, buy/sell ratio, price changes, market cap, FDV, and DEX pair info. Sub-second response.
| Name | Required | Description | Default |
|---|---|---|---|
| token_address | Yes | 0x-prefixed token contract address on Base chain | |
| include_all_pairs | No | Include details for all DEX pairs (default: false, primary pair only) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Good augmentation of annotations: adds data source attribution (DexScreener), performance characteristic ('Sub-second response'), and detailed return payload structure. Does not disclose error handling (e.g., unaudited tokens) or rate limits, but annotations already cover safety profile (readOnly/idempotent).
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose front-loaded, followed by return value enumeration, ending with performance note. Every clause earns its place with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive given constraints: describes return data structure in lieu of output schema, covers both parameters adequately. Minor gap: no mention of error states or data freshness/staleness expectations for financial data.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions. Description adds value by contextualizing 'Base token' for the address parameter and referencing 'DEX pair info' which aligns with include_all_pairs. Slightly above baseline 3 due to chain-specific context.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'snapshot' + resource 'Base token' + source 'via DexScreener' clearly defines scope. Distinguishes from siblings like token_risk_analysis (assessment) and technical_analysis (charting) by emphasizing comprehensive market data retrieval.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through returned field list (use when needing price/volume/liquidity), but lacks explicit when-to-use guidance versus siblings like token_risk_analysis or technical_analysis. No mentions of prerequisites or when-not-to-use scenarios.
token_risk_analysis (Token Risk Analysis) · A · Read-only · Idempotent
Comprehensive multi-source token risk scoring. Combines GoPlus honeypot detection, DexScreener liquidity analysis, Dune on-chain analytics, and SocialData narrative monitoring. Returns a 0-100 risk score with detailed check results. Response time ~3-5 seconds.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Blockchain network. Default: "base". Supports: base, ethereum | |
| token_address | Yes | 0x-prefixed contract address on Base chain (40 hex characters) | |
| analysis_depth | No | Analysis depth: "standard" (GoPlus + DexScreener, ~3s) or "deep" (adds Dune + social, ~8s) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable operational context beyond annotations: specifies external data providers (GoPlus, DexScreener, Dune, SocialData), discloses response latency (~3-5 seconds), and clarifies output structure (0-100 score with detailed check results), which compensates for the lack of output schema.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly-constructed sentences with zero redundancy: sentence 1 declares purpose, sentence 2 lists data sources, sentence 3 specifies return format, and sentence 4 provides timing. Information density is optimal with strong front-loading.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and strong annotations, the description successfully compensates for the missing output schema by specifying the risk score range and detailed check results. Could further explain what specific risks are evaluated (rug pull, liquidity drain, etc.) to achieve a 5.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing complete parameter documentation. The description mentions timing implications that map to the 'analysis_depth' parameter choices, but does not substantially augment the schema's already-excellent parameter definitions for token_address, chain, or analysis_depth.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifies exact action ('risk scoring') and resource type ('token') while distinguishing from siblings like 'security_check' or 'token_market_snapshot' by emphasizing multi-source risk aggregation (GoPlus, DexScreener, Dune, SocialData) and the specific 0-100 score output format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through the contrast between data sources (honeypot vs liquidity vs narrative monitoring) and mentions performance characteristics, but lacks explicit comparison to sibling tools like 'security_check' or guidance on when 'standard' vs 'deep' analysis is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
yield_scanner — Yield Scanner (A, Read-only)
Impermanent-loss (IL)-adjusted yield opportunities on Base. Combines DefiLlama yields with IL analysis and ranks pools by risk-adjusted net APY.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top pools (default: 10, max: 50) | |
| min_tvl_usd | No | Minimum TVL filter in USD (default: 10000) | |
| holding_days | No | Holding period for IL calculation (default: 30) | |
| stablecoin_only | No | Only return stablecoin pools | |
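The request shape follows the same MCP `tools/call` pattern; a sketch based on the schema above, with all argument values purely illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "yield_scanner",
    "arguments": {
      "limit": 5,
      "min_tvl_usd": 100000,
      "holding_days": 30,
      "stablecoin_only": true
    }
  }
}
```

All four arguments are optional; leaving them out returns the top 10 pools above $10,000 TVL with IL computed over a 30-day holding period, per the defaults in the table.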
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With readOnlyHint covering safety, the description adds valuable context: data provenance (DefiLlama), calculation methodology (IL analysis), ranking behavior (risk-adjusted net APY), and scope (Base chain). Does not mention rate limits or pagination, but provides key behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences. Front-loaded with the core concept (IL-adjusted Base yields). Second sentence explains methodology and output. Zero redundancy—every word describes unique functionality not found in structured metadata.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong given constraints: 100% input schema coverage and annotations present. Compensates for missing output schema by describing the conceptual return (ranked pools with risk-adjusted net APY). Minus one point because specific output fields aren't detailed despite no output schema being present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema has 100% coverage, the description enhances understanding: 'ranks pools' clarifies the 'limit' parameter returns top N results, and 'IL analysis' contextualizes the 'holding_days' parameter purpose. Adds meaningful semantic layer connecting parameters to the tool's core logic.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('ranks') and resource ('pools') clearly stated. Distinguishes from sibling 'il_calculator' by emphasizing it combines DefiLlama yields with IL analysis to find opportunities, not just calculate IL. 'Base' clarifies the specific chain scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context (finding yield opportunities with IL risk factored in) by describing the unique combination of DefiLlama yields and IL analysis. However, lacks explicit 'use when' guidance or explicit differentiation from 'token_risk_analysis' or 'il_calculator' siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.