Glama
Ownership verified

Server Details

Per-transaction crypto trade validator for AI agents. Returns deterministic PROCEED / CAUTION / BLOCK verdicts using WaveGuard anomaly detection, history checks, and rug-pull risk analysis.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.


Tool Definition Quality

Score is being calculated.

Available Tools

7 tools
cryptoguard_counterfactual_trade
Read-only · Idempotent

Stateless what-if sensitivity analysis. Submit baseline + counterfactual variants in one request and receive tipping-point guidance.

Parameters (JSON Schema)

training (required): Reference normal samples.
base_test (required): Baseline sample under evaluation.
field_level (optional): 0=real field, 1=complex field (default: 1).
sensitivity (optional): Anomaly sensitivity (default: 1.0).
counterfactual_tests (required): What-if variants of the baseline sample.
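
As a rough sketch, the arguments for one call might look like the following. Representing each sample as a flat numeric feature vector is an assumption; the page does not document the actual sample format.

```python
import json

# Hypothetical arguments for cryptoguard_counterfactual_trade. The flat-vector
# sample format and all values below are illustrative assumptions.
baseline = [0.42, 1.10, 0.03]

arguments = {
    "training": [[0.40, 1.00, 0.05], [0.45, 1.05, 0.04]],  # reference normal samples
    "base_test": baseline,                                  # baseline under evaluation
    "field_level": 1,                                       # 1 = complex field (default)
    "sensitivity": 1.0,
    # What-if variants: perturb one feature at a time to probe the tipping point.
    "counterfactual_tests": [
        [0.42, 1.10, 0.30],
        [0.80, 1.10, 0.03],
    ],
}

payload = json.dumps({"name": "cryptoguard_counterfactual_trade", "arguments": arguments})
```

Because the tool is stateless, the baseline and all variants travel in this single request.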
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish the operation is read-only, idempotent, and non-destructive. The description adds value by confirming 'Stateless' behavior and explaining the analytical output ('tipping-point guidance'), which compensates partially for the missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. It front-loads the key concept ('Stateless what-if sensitivity analysis') and immediately follows with input/output expectations, making every word earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description mentions 'tipping-point guidance' to hint at the return value, the absence of an output schema leaves the actual data structure and content unexplained. For a 5-parameter analytical tool, additional detail on the sensitivity analysis results would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description maps 'baseline' to base_test and 'counterfactual variants' to counterfactual_tests, providing conceptual context, but does not add syntax, format details, or explain the training parameter's role beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'what-if sensitivity analysis' and processes 'counterfactual variants,' distinguishing it from the validation-focused siblings (validate_trade, validate_trade_plus). However, it lacks an explicit comparison statement naming alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through terms like 'what-if' and 'tipping-point guidance,' suggesting use for scenario testing and threshold analysis. It does not explicitly state prerequisites, when-not-to-use conditions, or direct comparisons to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cryptoguard_health
Read-only · Idempotent

Check CryptoGuard API health, version, and service status. No payment required. Use this to verify the service is running.

Parameters (JSON Schema)

verbose (optional): Return detailed health info including uptime and version details (default: false).
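
Since the transport is Streamable HTTP, a health probe is just a `tools/call` request. A minimal JSON-RPC body might look like the following; the endpoint URL is omitted, and the envelope shape follows the MCP convention rather than anything documented on this page.

```python
import json

# Minimal MCP tools/call envelope for the health check (request body only;
# the server URL is not shown on this page and is omitted here).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cryptoguard_health",
        "arguments": {"verbose": True},  # include uptime and version details
    },
}
body = json.dumps(request)
```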
Behavior: 4/5

Annotations already declare readOnly/idempotent/destructive=false. The description adds valuable behavioral context not in the annotations: 'No payment required' alerts the agent that this is a free-tier operation (critical for crypto/financial tools where siblings likely incur costs). No contradictions with the annotations.

Conciseness: 5/5

Three sentences total, each earning its place: purpose statement, cost disclosure, and usage directive. Front-loaded with the core action, zero redundancy, and appropriately sized for a single-parameter health check tool.

Completeness: 5/5

For a simple health endpoint with one optional boolean parameter and no output schema, the description is complete. It covers functionality, cost model, and verification use case. Annotations handle safety properties, so no additional behavioral disclosure is required.

Parameters: 3/5

Schema coverage is 100% (the verbose parameter is fully documented). The description adds no parameter-specific information, but with complete schema coverage the baseline of 3 is appropriate: the schema carries the semantic load adequately.

Purpose: 5/5

The description uses a specific verb ('Check') and clearly identifies the resource ('CryptoGuard API health, version, and service status'). It effectively distinguishes itself from siblings like cryptoguard_validate_trade and cryptoguard_scan_token by being explicitly a meta-service health endpoint rather than an operational crypto tool.

Usage Guidelines: 4/5

States an explicit use case ('Use this to verify the service is running') and provides critical financial context ('No payment required') that helps distinguish it from paid trading operations. It lacks explicit 'when not to use' contrasts with specific siblings, but the domain separation (health vs. trading/scanning) is clear enough.

cryptoguard_rug_check
Read-only · Idempotent

Assess rug pull risk for a specific DEX trading pair. Scores 6 risk factors (0-100): liquidity depth, pair age, volume/liquidity ratio, price action, buy/sell imbalance, and metadata.

Example: check if a new Solana pair is a potential rug pull.

Parameters (JSON Schema)

chain (required): Blockchain (solana, ethereum, base, bsc).
pair_address (required): DEX pair contract address.
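
A sketch of the arguments plus a naive aggregation of the six factor scores. The pair address below is a placeholder, and averaging is an illustrative assumption; the page does not document how the server combines factor scores.

```python
# Hypothetical arguments for cryptoguard_rug_check. The pair address is a
# placeholder, not a real contract.
ALLOWED_CHAINS = {"solana", "ethereum", "base", "bsc"}

arguments = {
    "chain": "solana",
    "pair_address": "PairAddressPlaceholder111111111111111111",  # placeholder
}
assert arguments["chain"] in ALLOWED_CHAINS

def overall_risk(factor_scores):
    # Naive average of the six 0-100 factor scores (liquidity depth, pair age,
    # volume/liquidity ratio, price action, buy/sell imbalance, metadata).
    # The server's actual aggregation method is not documented on this page.
    assert len(factor_scores) == 6
    return sum(factor_scores) / len(factor_scores)
```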
Behavior: 4/5

Annotations already declare the readOnly/idempotent hints; the description adds valuable behavioral context by detailing the 6 specific risk factors scored (liquidity depth, pair age, etc.) and the 0-100 scoring methodology. It does not contradict the annotations (consistent with a read-only assessment).

Conciseness: 5/5

Two efficient sentences: the first defines purpose and methodology, the second provides concrete usage context. Every sentence earns its place with zero redundancy or filler.

Completeness: 4/5

Given the simple 2-parameter schema and lack of an output schema, the description adequately covers the tool's function by explaining the 6 risk factors and the scoring range. Rich annotations cover the safety profile. Minor gap: it could specify the return structure (per-factor object vs. aggregate score).

Parameters: 3/5

With 100% schema description coverage for both parameters, the baseline is 3. The description's example ('Solana pair') aligns with the chain parameter, but it does not add syntax details, format constraints, or parameter relationships beyond what the schema already provides.

Purpose: 5/5

The description explicitly states that the tool assesses rug pull risk for a specific DEX trading pair, pairing a specific verb ('assess') with a clear resource. It distinguishes itself from siblings like 'scan_token' or 'validate_trade' by focusing specifically on rug pull risk scoring rather than general scanning or trade validation.

Usage Guidelines: 3/5

The description provides implied usage through the example ('check if a new Solana pair is a potential rug pull'), suggesting when to use it. However, it lacks explicit when-not-to-use guidance or differentiation from sibling tools like 'cryptoguard_scan_token', which might also analyze token safety.

cryptoguard_scan_token
Read-only · Idempotent

Scan a single token for anomalous market behavior using WaveGuard physics-based anomaly detection. Compares the token to TIER-MATCHED peers (microcaps vs microcaps, large-caps vs large-caps). Returns anomaly scores, risk level, and explanations.

Example: scan 'solana' to check if its metrics are unusual.

Parameters (JSON Schema)

coin_id (required): CoinGecko coin ID (e.g., 'bitcoin', 'solana', 'pepe').
sensitivity (optional): Anomaly sensitivity multiplier (default: 1.0). Higher = more sensitive.
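
A minimal argument sketch. Note that the required ID is a CoinGecko coin ID (e.g., 'solana'), not an exchange ticker; the lowercase guard below reflects the lowercase IDs shown in the schema examples, not a documented server constraint.

```python
# Hypothetical arguments for cryptoguard_scan_token; values are illustrative.
arguments = {
    "coin_id": "solana",   # CoinGecko coin ID, not a ticker like 'SOL'
    "sensitivity": 1.5,    # > 1.0 flags anomalies more aggressively
}

# Guard against a common mistake: passing an uppercase ticker as coin_id.
# (Assumption based on the schema's lowercase examples.)
assert arguments["coin_id"] == arguments["coin_id"].lower()
```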
Behavior: 4/5

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context: the tier-matched peer comparison methodology (microcaps vs. microcaps) and, since no output schema exists, disclosure of the return values ('anomaly scores, risk level, and explanations').

Conciseness: 5/5

Three sentences efficiently convey purpose, methodology, and returns, followed by a concrete example. Every sentence earns its place: the first defines the function and algorithm, the second explains the comparison logic, the third lists the return values, and the example is actionable. No redundancy.

Completeness: 5/5

For a 2-parameter tool with 100% schema coverage and existing annotations, the description is complete. It compensates for the missing output schema by explicitly describing the return values (scores, risk level, explanations) and explains the WaveGuard methodology sufficiently for agent selection.

Parameters: 3/5

Schema coverage is 100%, with coin_id and sensitivity fully documented in the schema. The description's example ('solana') aids understanding of the coin_id format but does not add semantic meaning beyond what the schema already provides. The baseline of 3 is appropriate given the high schema coverage.

Purpose: 5/5

The description uses the specific verb 'Scan' with the resource 'token' and clearly states the analysis type ('WaveGuard physics-based anomaly detection'). It distinguishes itself from siblings like 'rug_check' by emphasizing the 'TIER-MATCHED peers' comparison and anomalous-behavior detection rather than scam detection or trade validation.

Usage Guidelines: 4/5

Provides clear context that this tool detects anomalous market behavior using physics-based detection against tier-matched peers, and the example ('scan solana') clarifies usage. However, it lacks explicit guidance on when to prefer this tool over the 'rug_check' or 'health' siblings.

cryptoguard_validate_trade
Read-only · Idempotent

Validate a crypto trade BEFORE execution. Returns a verdict: PROCEED, CAUTION, or BLOCK. Runs 5 checks: peer anomaly scan via WaveGuard physics engine, self-history comparison, rug pull risk assessment, CEX/DEX price cross-check, and concentration risk analysis. Accepts token name, symbol, or contract address.

Example: validate buying $500 of PEPE before executing.

Parameters (JSON Schema)

chain (optional): Blockchain for contract address resolution (ethereum, solana, base, bsc, polygon, avalanche, arbitrum).
token (required): Token to validate. Can be a name (bitcoin), symbol (BTC), or contract address (0x...).
action (optional): Trade action type (default: buy).
amount_usd (optional): Trade amount in USD for concentration risk analysis.
pair_address (optional): DEX pair address for rug pull check.
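
Mirroring the page's example (validate buying $500 of PEPE), a gating sketch might look like this. The verdict-handling policy below is an illustrative assumption about how an agent could act on the verdict, not documented server behavior.

```python
# Hypothetical arguments for cryptoguard_validate_trade, mirroring the page's
# example of validating a $500 PEPE buy. The chain value is an assumption.
arguments = {
    "token": "PEPE",       # name, symbol, or 0x... contract address
    "chain": "ethereum",
    "action": "buy",
    "amount_usd": 500,     # enables concentration risk analysis
}

def should_execute(verdict: str) -> bool:
    # The tool returns PROCEED, CAUTION, or BLOCK. In this sketch only PROCEED
    # auto-executes; CAUTION is routed to human review and BLOCK is a hard stop.
    return verdict == "PROCEED"
```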
Behavior: 4/5

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds significant value by disclosing the ternary return verdicts (PROCEED, CAUTION, BLOCK) and detailing the 5 internal validation checks (including the 'WaveGuard physics engine'), which helps the agent understand the tool's decision logic beyond the safety annotations.

Conciseness: 5/5

Exceptionally well structured: execution timing → return values → methodology (5 checks) → input flexibility → example. Every sentence conveys unique information with zero redundancy.

Completeness: 4/5

Given 5 parameters and no output schema, the description adequately compensates by documenting the return values (PROCEED/CAUTION/BLOCK) and providing an example. It could be improved by noting behavior for unknown tokens (leveraging openWorldHint=true) or error conditions.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. The description improves on this by mapping parameters to specific checks: amount_usd is linked to 'concentration risk analysis' and pair_address to the 'rug pull check', providing semantic context that the schema alone doesn't convey.

Purpose: 5/5

The description provides a specific verb ('Validate') and resource ('crypto trade') and explicitly scopes the tool to 'BEFORE execution'. It distinguishes itself from execution tools and siblings by enumerating the 5 specific checks performed (peer anomaly, rug pull, CEX/DEX cross-check, etc.), clearly contrasting with narrower siblings like cryptoguard_rug_check.

Usage Guidelines: 4/5

Provides clear temporal context ('BEFORE execution') and includes a concrete example ('validate buying $500 of PEPE'). However, it lacks explicit guidance on when to use this tool versus siblings like cryptoguard_validate_trade_plus or cryptoguard_rug_check (e.g., when comprehensive validation vs. spot checking is needed).

cryptoguard_validate_trade_plus
Read-only · Idempotent

Premium stateless decision bundle for one-shot trade gating. Caller provides full context (training/test + optional what-if blocks) and receives verdict, risk score, instability/mechanism signals, and recommendation.

Parameters (JSON Schema)

test (required): Trade candidate sample to evaluate.
sequence (optional): Ordered sequence for trajectory/regime checks.
training (required): Reference normal samples.
field_level (optional): 0=real field, 1=complex field (default: 1).
sensitivity (optional): Anomaly sensitivity (default: 1.0).
intervention_tests (optional): Intervention variants for mechanism probing.
intervention_labels (optional): Labels for intervention tests.
counterfactual_tests (optional): What-if variants for sensitivity analysis.
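
A sketch of a full-context call. As with the counterfactual tool, the flat-vector sample format is an assumption, and the intervention label is invented for illustration.

```python
# Hypothetical arguments for cryptoguard_validate_trade_plus. The flat-vector
# sample format and the intervention label are illustrative assumptions.
arguments = {
    "training": [[0.40, 1.00], [0.45, 1.05]],                # reference normal samples
    "test": [0.90, 1.40],                                    # trade candidate to evaluate
    "sequence": [[0.40, 1.00], [0.60, 1.20], [0.90, 1.40]],  # ordered, for trajectory checks
    "field_level": 1,
    "sensitivity": 1.0,
    "intervention_tests": [[0.40, 1.40]],                    # mechanism-probing variant
    "intervention_labels": ["price_only_shock"],             # one label per intervention test
    "counterfactual_tests": [[0.90, 1.00]],                  # what-if variant
}

# Keep interventions and their labels aligned.
assert len(arguments["intervention_tests"]) == len(arguments["intervention_labels"])
```

Because the call is stateless, everything the verdict depends on (training, test, and optional what-if blocks) is bundled into this one request.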
Behavior: 4/5

Annotations cover safety and idempotency; the description adds valuable context by enumerating the return values (verdict, risk score, instability/mechanism signals, recommendation), which compensates for the missing output schema, and by confirming the 'stateless' nature, aligning with idempotentHint.

Conciseness: 5/5

Two densely packed sentences with zero waste. The first establishes the tool's identity and nature; the second covers input requirements and output guarantees. Every clause conveys distinct information.

Completeness: 4/5

For a complex 8-parameter tool with ML-style inputs and no output schema, the description successfully explains the core paradigm and return structure. Minor gaps remain (e.g., the specific enum semantics of field_level, error conditions), but coverage is strong given the constraints.

Parameters: 4/5

While the schema has 100% coverage (baseline 3), the description adds conceptual grouping that aids understanding: the 'training/test' paradigm explains the required parameters' relationship, and 'what-if blocks' maps to the intervention and counterfactual test parameters, providing semantic coherence.

Purpose: 5/5

The description provides a specific verb and resource ('trade gating') and distinguishes this tool from the sibling 'validate_trade' by emphasizing the 'Premium' positioning, the 'Plus' features (what-if blocks, training/test paradigm), and the comprehensive output bundle (verdict, risk score, signals).

Usage Guidelines: 3/5

Implies usage context through 'one-shot trade gating' and the 'full context' requirement, but lacks explicit guidance on when to use this 'Plus' version versus the standard 'validate_trade' sibling or other tools like 'counterfactual_trade'.

