Wolfpack Intelligence
Server Details
On-chain security and intelligence for Base chain trading agents. Token risk analysis, security checks, narrative momentum, and agent trust scores.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
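For example, here is a minimal connection sketch using the official Python MCP SDK over streamable HTTP. The gateway URL below is a placeholder, not this server's real endpoint; use the one from your Glama connector settings.

```python
# Minimal sketch: connect through the gateway over streamable HTTP with the
# official Python MCP SDK. GATEWAY_URL is a placeholder.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

GATEWAY_URL = "https://glama.ai/mcp/instances/your-instance/mcp"  # placeholder

async def main() -> None:
    async with streamablehttp_client(GATEWAY_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```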
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 10 of 10 tools scored. Lowest: 3.4/5.
Most tools have distinct purposes, but some overlap exists. For example, 'security_check' and 'token_risk_analysis' both involve token security, though the latter is more comprehensive. Similarly, 'il_calculator' and 'yield_scanner' both handle impermanent loss, but with different focuses. Descriptions help differentiate them, but agents might occasionally confuse related tools.
Naming is consistently snake_case (e.g., 'agent_credit_risk_index', 'graduation_readiness_check'), but there is no consistent verb_noun pattern: some names read as metrics or indexes while others read as actions or checks. Names are generally readable and reflect their functions, though a uniform pattern would make them more predictable.
With 10 tools, the count is well-scoped for a financial intelligence server covering areas like agent analysis, token markets, and risk assessment. Each tool appears to serve a specific purpose without redundancy, making the set manageable and focused on the server's domain.
The toolset covers key areas in financial and crypto intelligence, including agent scoring, market data, risk analysis, and technical indicators. Minor gaps exist, such as no direct tool for historical data analysis or portfolio management, but core workflows are well-supported, allowing agents to perform comprehensive assessments without major dead ends.
Available Tools
10 tools

agent_credit_risk_index (Agent Credit Risk Index) · Grade A · Read-only · Idempotent
Financial credit risk scoring for ACP agents. Three pillars: realized liquidity (USDC balance vs job fees, 40%), execution reliability (success rate weighted by volume, 40%), wallet maturity (age + VIRTUAL holdings, 20%). Returns credit score 0-100 with rating (prime/standard/subprime/unrated). Handles liquid-lean agents (high volume, low balance) without penalty. $1.00/query.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Virtuals ACP agent ID (integer) | — |
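The disclosed 40/40/20 weighting makes the scoring arithmetic easy to illustrate. Below is a hypothetical reconstruction of that arithmetic; the pillar sub-scores and rating cutoffs are illustrative assumptions, not the server's actual internals:

```python
# Hypothetical reconstruction of the weighted credit score. The 40/40/20
# weights come from the tool description; the rating cutoffs are assumed.
def credit_score(liquidity: float, reliability: float, maturity: float) -> tuple[float, str]:
    """Combine three 0-100 pillar scores into a 0-100 credit score."""
    score = 0.40 * liquidity + 0.40 * reliability + 0.20 * maturity
    if score >= 80:
        rating = "prime"       # cutoff assumed
    elif score >= 50:
        rating = "standard"    # cutoff assumed
    else:
        rating = "subprime"    # cutoff assumed
    return round(score, 1), rating

# A liquid-lean agent: high execution reliability, modest balance, young wallet.
print(credit_score(liquidity=70, reliability=95, maturity=40))  # (74.0, 'standard')
```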
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure beyond annotations: specifies exact calculation weights (40%/40%/20%), output taxonomy (prime/standard/subprime/unrated), cost ($1.00/query), and edge case handling (liquid-lean agents). Annotations confirm read-only/idempotent but description adds rich operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five information-dense sentences covering purpose, methodology, output format, edge cases, and cost. No redundancy. Well-structured with purpose front-loaded ('Financial credit risk scoring...').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description fully documents return values (0-100 score with categorical ratings). Complex financial calculation logic is transparently disclosed. Cost disclosure ($1.00/query) enables informed agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with 'Virtuals ACP agent ID (integer)'. Description references 'ACP agents' confirming context but does not add parameter-specific semantics beyond schema. Baseline 3 appropriate given complete schema self-documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Financial credit risk scoring for ACP agents' with specific methodology (three weighted pillars). Distinguishes from sibling 'agent_trust_score' by focusing on financial metrics (USDC, VIRTUAL holdings) rather than general reputation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through detailed methodology explanation (financial credit risk vs general trust), but lacks explicit when-to-use/when-not-to-use guidance or direct comparison to 'agent_trust_score' sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_trust_score (Agent Trust Score) · Grade A · Read-only · Idempotent
Composite agent reliability rating. Scores any ERC-8004/ACP registered agent on 4 pillars: ACP performance (40%), network position (25%), operational health (25%), metadata compliance (10%). Accepts agent_address (0x hex) or agent_id (number). Returns trust score 0-100 with breakdown.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | No | ACP/ERC-8004 numeric agent ID | — |
| agent_address | No | 0x-prefixed wallet address of the agent to score | — |
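Since neither parameter is required but one of them must identify the agent, a call supplies either the address or the numeric ID (the description does not specify behavior when both are given). A sketch reusing the `session` from the gateway connection example above; the address is a placeholder:

```python
# Supply one identifier: agent_address (0x hex) or agent_id (number).
# `session` comes from the gateway connection sketch above.
result = await session.call_tool(
    "agent_trust_score",
    {"agent_address": "0x0000000000000000000000000000000000000000"},  # placeholder
)
# Equivalent lookup by numeric ID:
# result = await session.call_tool("agent_trust_score", {"agent_id": 42})
```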
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety, so description adds value by explaining the scoring algorithm (weighted pillars) and return format (0-100 with breakdown). It discloses domain constraints (ERC-8004/ACP registered agents only) not present in annotations. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two information-dense sentences with zero waste. First sentence establishes purpose and methodology; second covers inputs/outputs. Front-loaded with the core value proposition (composite reliability rating) and efficiently packs methodology details (specific percentages) without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description adequately compensates by specifying the return value structure (0-100 score with breakdown). It explains the complex scoring methodology sufficiently for an agent to understand what the tool evaluates. Minor gap: doesn't specify behavior if both ID and address are provided simultaneously.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds crucial semantic context that the two parameters are alternatives ('or' relationship) since neither is required in schema, plus format reinforcement (0x hex, number). This clarifies input expectations beyond what the structured schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly defines the tool as a 'Composite agent reliability rating' with specific methodology (4 pillars with explicit weights). It distinguishes from siblings like 'agent_credit_risk_index' and 'security_check' by specifying it evaluates ERC-8004/ACP registered agents on trust/reliability metrics rather than financial credit or security vulnerabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by detailing the four scoring pillars (performance, network position, operational health, compliance), which signals when to use this for reliability assessment. However, it lacks explicit contrast with sibling 'agent_credit_risk_index' or guidance on when to prefer this over 'security_check' for agent evaluation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
graduation_readiness_check (Graduation Readiness Check) · Grade B · Read-only
Live graduation readiness audit for ACP agents ($5.00 USDC). Fires real test jobs against your services, scores job lifecycle handling, schema correctness, and output consistency. Provides blockers, warnings, and remediation steps.
| Name | Required | Description | Default |
|---|---|---|---|
| offering_name | No | Specific service offering to check | all |
| target_agent_address | Yes | 0x-prefixed wallet address of the ACP agent to check | — |
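Because this tool fires real test jobs and charges $5.00 USDC per run despite its read-only annotation, a cautious client can gate the call behind explicit confirmation. A sketch, again reusing the `session` from the connection example; the target address is a placeholder:

```python
# Despite readOnlyHint=true, this tool fires real test jobs and costs
# $5.00 USDC per run, so require explicit confirmation before invoking it.
target = "0x0000000000000000000000000000000000000000"  # placeholder

if input(f"Run the $5.00 USDC graduation audit against {target}? [y/N] ").lower() == "y":
    result = await session.call_tool(
        "graduation_readiness_check",
        {"target_agent_address": target},  # omit offering_name to audit all offerings
    )
```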
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description reveals critical behavioral traits (costs $5.00 USDC, 'fires real test jobs') that contradict the readOnlyHint=true annotation. Read-only hints imply safe, side-effect-free operations, but firing real test jobs against services consumes target resources and incurs charges, representing a serious inconsistency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences: purpose+cost, mechanism+scoring criteria, and output format. Every word earns its place with no redundancy. Well front-loaded with the core value proposition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by detailing specific deliverables ('blockers, warnings, and remediation steps') and evaluation dimensions ('job lifecycle handling, schema correctness'). Includes cost transparency. Minor gap: could clarify what 'ACP' stands for or graduation criteria specifics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (target_agent_address and offering_name fully documented). Description references 'your services' which loosely maps to parameters but does not add syntax, format, or semantic details beyond what the schema already provides. Baseline 3 appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('audit'), target resource ('ACP agents'), and distinct scope ('graduation readiness' with live testing). Clearly differentiates from siblings like 'security_check' or 'agent_trust_score' by emphasizing the graduation context and real job firing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The $5.00 USDC cost provides implicit usage guidance (use when serious about graduation), but lacks explicit when-to-use criteria or comparison to sibling tools like 'security_check'. No guidance on prerequisites or when NOT to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
il_calculator (Impermanent Loss Calculator) · Grade A · Read-only · Idempotent
Calculate impermanent loss for standard AMM (x*y=k) or concentrated liquidity (V3) positions. Optionally compute net yield after IL with APY.
| Name | Required | Description | Default |
|---|---|---|---|
| pool_type | No | standard or concentrated | standard |
| entry_price | Yes | Price at LP entry | — |
| price_lower | No | Lower range bound (concentrated only) | — |
| price_upper | No | Upper range bound (concentrated only) | — |
| holding_days | No | Holding period in days | 7 |
| current_price | Yes | Current price | — |
| gross_apy_percent | No | Gross APY for net yield calculation | — |
| position_size_usd | Yes | Position size in USD | — |
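For the standard x*y=k case, impermanent loss has the well-known closed form IL = 2*sqrt(r)/(1+r) - 1, where r is the ratio of current price to entry price. Below is a sketch of that formula plus a plausible net-yield calculation; the fee accrual and annualization are assumptions about how the tool combines APY with IL:

```python
import math

def impermanent_loss(entry_price: float, current_price: float) -> float:
    """IL of a standard x*y=k LP position vs. holding: 2*sqrt(r)/(1+r) - 1 (always <= 0)."""
    r = current_price / entry_price
    return 2 * math.sqrt(r) / (1 + r) - 1

def net_yield_usd(position_size_usd: float, gross_apy_percent: float,
                  holding_days: float, entry_price: float, current_price: float) -> float:
    """Fees earned over the holding period plus the (negative) IL in USD. Assumed model."""
    fees = position_size_usd * (gross_apy_percent / 100) * (holding_days / 365)
    il_usd = impermanent_loss(entry_price, current_price) * position_size_usd
    return fees + il_usd

print(f"{impermanent_loss(1.0, 2.0):.4f}")                # -0.0572: a 2x move costs ~5.7%
print(f"{net_yield_usd(10_000, 25.0, 7, 1.0, 2.0):.2f}")  # fees don't cover IL here
```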
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint, destructiveHint) and idempotency. Description adds valuable behavioral context not in annotations: the mathematical models used (x*y=k for standard, V3 for concentrated) and the optional net yield calculation feature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence front-loads core functionality with specific technical differentiators (x*y=k, V3). Second sentence covers optional feature. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong functional description covers both calculation modes and optional yield computation. Comprehensive schema and solid annotations support completeness. Minor gap: no output schema or description of return value format (percentage vs decimal vs complex object).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description adds conceptual context (x*y=k mathematical model for standard, V3 mechanics for concentrated) that enriches understanding of pool_type, but does not elaborate on parameter formats or relationships beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verbs ('Calculate', 'compute') and resources ('impermanent loss', 'standard AMM x*y=k', 'concentrated liquidity V3 positions'). It clearly distinguishes from siblings like yield_scanner or token_risk_analysis by focusing specifically on IL mathematics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through specific domain terminology (IL, AMM types), but lacks explicit when/when-not guidance. Does not clarify relationship to sibling yield_scanner despite mentioning 'net yield', which could confuse tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prediction_market (Prediction Market Analysis) · Grade A · Read-only
Crypto prediction market data from Polymarket. Returns current odds, volume, and liquidity for crypto-relevant events.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of markets to return (max: 50) | 10 |
| query | No | Search query to filter markets (e.g. "bitcoin", "ethereum") | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With annotations declaring readOnlyHint=true, the description adds valuable behavioral context by specifying the external data source (Polymarket) and the specific data structure returned (odds, volume, liquidity). It does not contradict the read-only safety profile and implies real-time data ('current odds').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero redundancy. First sentence establishes source and domain; second specifies return values. Every word earns its place with no filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with 2 optional parameters and full annotation coverage, the description adequately compensates for the missing output schema by explicitly stating the three return value categories (odds, volume, liquidity). Minor gap regarding pagination behavior or rate limiting prevents a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for both parameters (limit and query), so the structured schema carries the semantic load. The description mentions 'crypto-relevant events' which loosely maps to the query parameter's purpose, but adds no syntax or format guidance beyond the schema baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb-resource combination ('Returns current odds, volume, and liquidity') with explicit data source (Polymarket). Clearly distinguishes from siblings like token_market_snapshot and technical_analysis by specifying 'prediction market' and 'crypto-relevant events' rather than price data or chart analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual signals for when to use the tool through specific domain terminology ('Polymarket', 'odds', 'prediction market') that naturally guides selection for event-based probability queries. However, lacks explicit 'when not to use' guidance contrasting with sibling market data tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
security_check (Quick Security Check) · Grade A · Read-only · Idempotent
Fast GoPlus-only token security scan. Returns honeypot status, contract verification, ownership flags, and holder concentration. Sub-second response time. No social or on-chain analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Blockchain network. Supports: base, ethereum | "base" |
| token_address | Yes | 0x-prefixed contract address to scan for security issues | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only/idempotent safety, but description adds valuable operational context: 'Sub-second response time' (performance SLA), 'GoPlus-only' (data source/provider), and specific return field previews. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: sentence 1 defines the service (GoPlus scan), sentence 2 lists outputs, sentence 3 states performance, sentence 4 states limitations. Front-loaded with critical distinguishing characteristics.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates by enumerating four specific return fields and performance characteristics. Combined with rich annotations, this provides sufficient context for agent selection, though explicit error conditions would improve it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both parameters fully documented), establishing baseline 3. Description implies the token_address via 'token security scan' context but does not add syntax guidance, validation rules, or usage examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource ('token security scan'), distinguishes scope with 'GoPlus-only' and 'Fast', and explicitly lists four specific return data points (honeypot status, contract verification, ownership flags, holder concentration) that clearly differentiate it from sibling analysis tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides negative constraints ('No social or on-chain analysis') that help scope the tool, but does not explicitly name sibling alternatives (e.g., token_risk_analysis) or state positive conditions like 'use this when you need a quick preliminary scan vs deep analysis'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
technical_analysis (Technical Analysis) · Grade A · Read-only · Idempotent
Deterministic TA indicators from GeckoTerminal OHLCV candle data. RSI(14), SMA(20/50), Bollinger Bands, support/resistance levels, volume profile, trend direction.
| Name | Required | Description | Default |
|---|---|---|---|
| timeframe | No | Candle timeframe: 15m, 1h, 4h, 1d | 1h |
| token_address | Yes | 0x-prefixed token contract address on Base | — |
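The listed indicators are all deterministic functions of the OHLCV closes. As an illustration, SMA and Wilder-smoothed RSI(14) follow the textbook formulas below (a sketch; the server's exact smoothing choices aren't documented):

```python
def sma(closes: list[float], window: int = 20) -> float:
    """Simple moving average of the most recent `window` closes."""
    return sum(closes[-window:]) / window

def rsi(closes: list[float], period: int = 14) -> float:
    """Wilder-smoothed RSI: 100 - 100 / (1 + avg_gain / avg_loss)."""
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    gains = [max(d, 0.0) for d in deltas]
    losses = [max(-d, 0.0) for d in deltas]
    avg_gain = sum(gains[:period]) / period   # seed with a simple average
    avg_loss = sum(losses[:period]) / period
    for gain, loss in zip(gains[period:], losses[period:]):  # Wilder smoothing
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period
    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)
```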
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly/idempotent/safety properties. The description adds valuable behavioral context: the external data dependency (GeckoTerminal), calculation specifics (RSI 14-period, SMA 20/50), and deterministic nature. It could improve by noting rate limits or latency characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence with zero filler. Front-loads the data source and methodology, then efficiently lists the seven indicator types returned. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates effectively by enumerating the specific technical indicators returned. Combined with strong annotations and 100% input schema coverage, this provides sufficient context for invocation, though response structure details would further improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both token_address (0x-prefixed Base address) and timeframe (candle intervals). The description references 'GeckoTerminal OHLCV candle data' which reinforces the parameter purposes, but does not add syntax details or validation rules beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (deterministic TA indicators), data source (GeckoTerminal OHLCV), and enumerates exact outputs (RSI(14), SMA(20/50), Bollinger Bands, etc.). This specificity inherently distinguishes it from siblings like token_market_snapshot or token_risk_analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the indicator list implies this is for technical chart analysis versus fundamental data, the description lacks explicit guidance on when to select this over siblings like token_market_snapshot or when technical versus risk analysis is appropriate. No 'when-not-to-use' or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_market_snapshot (Token Market Snapshot) · Grade A · Read-only · Idempotent
Comprehensive market data snapshot for any Base token via DexScreener. Returns price, volume (1h/6h/24h), liquidity, buy/sell ratio, price changes, market cap, FDV, and DEX pair info. Sub-second response.
| Name | Required | Description | Default |
|---|---|---|---|
| token_address | Yes | 0x-prefixed token contract address on Base chain | — |
| include_all_pairs | No | Include details for all DEX pairs (false returns primary pair only) | false |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover safety (readOnly, idempotent), the description adds valuable context: data provenance (DexScreener), specific metrics returned (1h/6h/24h volume, FDV), and performance characteristics ('Sub-second response'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero waste: data source/scope, specific return values, and performance. Front-loaded with essential information. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by explicitly enumerating return fields (price changes, market cap, buy/sell ratio, etc.). Combined with strong annotations, provides sufficient context for invocation despite no structured output definition.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing full parameter documentation. Description mentions 'any Base token' which aligns with token_address, but does not elaborate on parameter semantics beyond what the schema already provides. Baseline 3 appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Comprehensive market data snapshot') and resource (Base token data via DexScreener). Clearly distinguishes from sibling tools like 'technical_analysis' and 'token_risk_analysis' by specifying it returns raw market metrics (price, volume, liquidity) rather than derived analysis or risk scores.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through specification of data source (DexScreener) and return types, but lacks explicit when-to-use guidance versus siblings like 'technical_analysis' or 'token_risk_analysis'. No mention of prerequisites (e.g., needing a valid Base token address).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
token_risk_analysis (Token Risk Analysis) · Grade A · Read-only · Idempotent
Comprehensive multi-source token risk scoring. Combines GoPlus honeypot detection, DexScreener liquidity analysis, Dune on-chain analytics, and SocialData narrative monitoring. Returns a 0-100 risk score with detailed check results. Response time ~3-5 seconds.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Blockchain network. Supports: base, ethereum | "base" |
| token_address | Yes | 0x-prefixed contract address on Base chain (40 hex characters) | — |
| analysis_depth | No | Analysis depth: "standard" (GoPlus + DexScreener, ~3s) or "deep" (adds Dune + social, ~8s) | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety profile; description adds critical behavioral context including specific external data sources queried (GoPlus, DexScreener, Dune, SocialData), output format (0-100 score with check results), and approximate latency. Does not mention failure modes for invalid addresses, but otherwise comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence covers capability and data sources; second covers output format and performance. Every clause earns its place with no redundancy or generic filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter aggregation tool without output schema, the description adequately compensates by detailing the return structure (0-100 score with check results) and data provenance. Sufficient for an agent to understand the integration surface, though explicit error handling notes would improve it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3). Description adds value by mapping abstract 'standard'/'deep' depth parameter to concrete data sources (GoPlus+DexScreener vs Dune+social), helping agents understand the quality/coverage trade-off implied by the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states specific verb ('scoring') and resource ('token risk') with unique methodology (GoPlus honeypot detection, DexScreener liquidity, etc.) that distinguishes it from siblings like security_check or technical_analysis. The 0-100 score output further differentiates its specific purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides timing estimates (~3-5 seconds) implying latency trade-offs, but lacks explicit guidance on when to use 'deep' versus 'standard' analysis or when to prefer this tool over siblings like security_check or token_market_snapshot. Usage is implied by capability listing rather than explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
yield_scanner (Yield Scanner) · Grade B · Read-only
IL-adjusted Base yield opportunities. Combines DefiLlama yields with IL analysis, ranks pools by risk-adjusted net APY.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top pools (max: 50) | 10 |
| min_tvl_usd | No | Minimum TVL filter in USD | 10000 |
| holding_days | No | Holding period for IL calculation | 30 |
| stablecoin_only | No | Only return stablecoin pools | — |
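One plausible reading of "risk-adjusted net APY" is gross APY minus the IL drag annualized over the chosen holding period; the server's exact adjustment isn't disclosed, so the ranking sketch below is an assumption:

```python
# Hypothetical ranking key: gross APY minus annualized IL drag. The server's
# actual adjustment formula is not disclosed.
def risk_adjusted_net_apy(gross_apy_percent: float, il_fraction: float,
                          holding_days: float = 30) -> float:
    """il_fraction is the (negative) IL over the holding period, e.g. -0.03 for -3%."""
    il_drag_percent = il_fraction * (365 / holding_days) * 100  # annualize, in percent
    return gross_apy_percent + il_drag_percent  # drag is negative

pools = [
    {"pool": "WETH/USDC", "gross_apy": 32.0, "il": -0.0300},
    {"pool": "USDC/DAI",  "gross_apy": 8.0,  "il": -0.0001},
]
ranked = sorted(pools, reverse=True,
                key=lambda p: risk_adjusted_net_apy(p["gross_apy"], p["il"]))
print(ranked[0]["pool"])  # USDC/DAI: the stable pool wins once IL is annualized
```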
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true. Description adds valuable methodology context (DefiLlama data source, IL analysis approach) beyond annotations. However, lacks disclosure of output format, pagination behavior, or data freshness since no output schema exists. Does not mention if results are cached or real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. Front-loaded with key value proposition (IL-adjusted yields). Domain-specific terminology (DefiLlama, APY) is appropriately dense for the intended audience. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a read-only scanner with 100% schema coverage and safety annotations. Describes the ranking methodology adequately. Minor gap: without output schema, could briefly mention return structure (list of pools with APY metrics) to complete the contract, though 'ranks pools' provides some indication.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description mentions 'IL-adjusted' which conceptually links to holding_days parameter, and 'pools' which contextually supports limit parameter, but does not add syntax details, format examples, or semantic clarifications beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific actions (combines DefiLlama yields with IL analysis, ranks pools) and resource (yield opportunities). Mentions methodology (risk-adjusted net APY) that distinguishes it from sibling il_calculator. However, 'Base' is ambiguous (could mean Base chain or basic yields) and doesn't explicitly clarify relationship to token_risk_analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or alternative suggestions. Does not clarify when to use this versus il_calculator (which likely calculates IL for specific positions) versus token_risk_analysis. Agent must infer usage from the domain terminology alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the listing lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.