CryptoGuard
Server Details
Per-transaction crypto trade validator for AI agents. Returns deterministic PROCEED / CAUTION / BLOCK verdicts using WaveGuard anomaly detection, history checks, and rug-pull risk analysis.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
7 tools

cryptoguard_counterfactual_trade (Read-only, Idempotent)
Stateless what-if sensitivity analysis. Submit baseline + counterfactual variants in one request and receive tipping-point guidance.
| Name | Required | Description | Default |
|---|---|---|---|
| training | Yes | Reference normal samples. | |
| base_test | Yes | Baseline sample under evaluation. | |
| field_level | No | 0=real field, 1=complex field (default: 1). | |
| sensitivity | No | Anomaly sensitivity (default: 1.0). | |
| counterfactual_tests | Yes | What-if variants of the baseline sample. | |
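Based on the parameter table above, a call might be assembled as in the following Python sketch. The JSON-RPC `tools/call` envelope and the shape of the samples (flat numeric dicts) are illustrative assumptions; only the parameter names and defaults come from the schema.

```python
import json

# Hypothetical tools/call payload for cryptoguard_counterfactual_trade.
# Sample values and their shape are illustrative, not a documented format.
payload = {
    "method": "tools/call",
    "params": {
        "name": "cryptoguard_counterfactual_trade",
        "arguments": {
            "training": [  # required: reference normal samples
                {"price": 1.00, "volume": 5000},
                {"price": 1.02, "volume": 5200},
            ],
            "base_test": {"price": 1.01, "volume": 5100},  # required: baseline sample
            "counterfactual_tests": [  # required: what-if variants of the baseline
                {"price": 1.50, "volume": 5100},   # what if price spikes?
                {"price": 1.01, "volume": 500},    # what if volume collapses?
            ],
            "sensitivity": 1.0,  # optional, default 1.0
            "field_level": 1,    # optional: 0=real field, 1=complex field (default 1)
        },
    },
}
print(json.dumps(payload, indent=2))
```

Each entry in `counterfactual_tests` perturbs one dimension of the baseline, which is what makes the tipping-point guidance in the response interpretable.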
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish the operation is read-only, idempotent, and non-destructive. The description adds value by confirming 'Stateless' behavior and explaining the analytical output ('tipping-point guidance'), which compensates partially for the missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. It front-loads the key concept ('Stateless what-if sensitivity analysis') and immediately follows with input/output expectations, making every word earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description mentions 'tipping-point guidance' to hint at the return value, the absence of an output schema leaves the actual data structure and content unexplained. For a 5-parameter analytical tool, additional detail on the sensitivity analysis results would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description maps 'baseline' to base_test and 'counterfactual variants' to counterfactual_tests, providing conceptual context, but adds no syntax or format details and does not explain the training parameter's role beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'what-if sensitivity analysis' and processes 'counterfactual variants,' distinguishing it from the validation-focused siblings (validate_trade, validate_trade_plus). However, it lacks an explicit comparison statement naming alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through terms like 'what-if' and 'tipping-point guidance,' suggesting use for scenario testing and threshold analysis. It does not explicitly state prerequisites, when-not-to-use conditions, or direct comparisons to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cryptoguard_health (Read-only, Idempotent)
Check CryptoGuard API health, version, and service status. No payment required. Use this to verify the service is running.
| Name | Required | Description | Default |
|---|---|---|---|
| verbose | No | Return detailed health info including uptime and version details (default: false). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly/idempotent/destructive=false. The description adds valuable behavioral context not in annotations: 'No payment required' alerts the agent this is a free-tier operation (critical for crypto/financial tools where siblings likely incur costs). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences total, each earning its place: purpose statement, cost disclosure, and usage directive. Front-loaded with the core action, zero redundancy, and appropriately sized for a single-parameter health check tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple health endpoint with one optional boolean parameter and no output schema, the description is complete. It covers functionality, cost model, and verification use case. Annotations handle safety properties, so no additional behavioral disclosure is required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (verbose parameter fully documented). The description adds no parameter-specific information, but with complete schema coverage, baseline 3 is appropriate—the schema carries the semantic load adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Check') and clearly identifies the resource ('CryptoGuard API health, version, and service status'). It effectively distinguishes from siblings like cryptoguard_validate_trade and cryptoguard_scan_token by being explicitly a meta-service health endpoint rather than an operational crypto tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States explicit use case ('Use this to verify the service is running') and provides critical financial context ('No payment required') that helps distinguish it from paid trading operations. Lacks explicit 'when not to use' contrasts with specific siblings, but the domain separation (health vs trading/scanning) is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cryptoguard_rug_check (Read-only, Idempotent)
Assess rug pull risk for a specific DEX trading pair. Scores 6 risk factors (0-100): liquidity depth, pair age, volume/liquidity ratio, price action, buy/sell imbalance, and metadata.
Example: check if a new Solana pair is a potential rug pull.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | Blockchain (solana, ethereum, base, bsc). | |
| pair_address | Yes | DEX pair contract address. | |
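A minimal call sketch for the two required parameters. The JSON-RPC envelope is an assumption, and the pair address below is a placeholder, not a real contract; the chain values come from the parameter table.

```python
import json

# Hypothetical tools/call payload for cryptoguard_rug_check.
ALLOWED_CHAINS = {"solana", "ethereum", "base", "bsc"}  # from the parameter table

payload = {
    "method": "tools/call",
    "params": {
        "name": "cryptoguard_rug_check",
        "arguments": {
            "chain": "solana",                     # required: one of ALLOWED_CHAINS
            "pair_address": "ExamplePairAddr111",  # required: DEX pair address (placeholder)
        },
    },
}
assert payload["params"]["arguments"]["chain"] in ALLOWED_CHAINS
print(json.dumps(payload, indent=2))
```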
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent hints, so the description adds valuable behavioral context by detailing the 6 specific risk factors scored (liquidity depth, pair age, etc.) and the 0-100 scoring methodology. It does not contradict annotations (consistent with readOnly assessment).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: the first defines purpose and methodology, the second provides concrete usage context. Every sentence earns its place with zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema and lack of output schema, the description adequately covers the tool's function by explaining the 6 risk factors and scoring range. Rich annotations cover safety profiles. Minor gap: could specify return structure (object vs aggregate score).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for both parameters, the baseline is 3. The description reinforces the example ('Solana pair') aligning with the chain parameter, but does not add syntax details, format constraints, or parameter relationships beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Assess rug pull risk for a specific DEX trading pair,' pairing a specific verb (assess) with its resource. It distinguishes from siblings like 'scan_token' or 'validate_trade' by focusing specifically on rug pull risk scoring rather than general scanning or trade validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage through the example ('check if a new Solana pair is a potential rug pull'), suggesting when to use it. However, it lacks explicit when-not-to-use guidance or differentiation from sibling tools like 'cryptoguard_scan_token' which might also analyze token safety.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cryptoguard_scan_token (Read-only, Idempotent)
Scan a single token for anomalous market behavior using WaveGuard physics-based anomaly detection. Compares the token to TIER-MATCHED peers (microcaps vs microcaps, large-caps vs large-caps). Returns anomaly scores, risk level, and explanations.
Example: scan 'solana' to check if its metrics are unusual.
| Name | Required | Description | Default |
|---|---|---|---|
| coin_id | Yes | CoinGecko coin ID (e.g., 'bitcoin', 'solana', 'pepe'). | |
| sensitivity | No | Anomaly sensitivity multiplier (default: 1.0). Higher = more sensitive. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context: the tier-matched peer comparison methodology (microcaps vs microcaps) and discloses return values ('anomaly scores, risk level, and explanations') since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently convey purpose and methodology, followed by a concrete example. Every sentence earns its place: first defines the function and algorithm, second explains the comparison logic and returns, third provides actionable example. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with 100% schema coverage and existing annotations, the description is complete. It compensates for the missing output schema by explicitly describing return values (scores, risk level, explanations) and explains the WaveGuard methodology sufficiently for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with coin_id and sensitivity fully documented in the schema. The description provides an example ('solana') which aids understanding of coin_id format, but does not add semantic meaning beyond what the schema already provides. Baseline 3 appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Scan' with resource 'token' and clearly states the analysis type ('WaveGuard physics-based anomaly detection'). It distinguishes from siblings like 'rug_check' by emphasizing 'TIER-MATCHED peers' comparison and anomalous behavior detection rather than scam detection or trade validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context that this is for detecting anomalous market behavior using physics-based detection against tier-matched peers. The example ('scan solana') clarifies usage. However, it lacks explicit guidance on when to prefer this over 'rug_check' or 'health' siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cryptoguard_search (Read-only, Idempotent)
Search for a token's CoinGecko coin ID by name, symbol, or contract address. Use this first if you're unsure of the correct coin_id for scan_token or validate_trade.
Example: search 'pepe' to find the correct coin ID.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Token name, symbol, or contract address to search. | |
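The description positions search as the first step before scan_token or validate_trade. The sketch below shows that two-step workflow; `call_tool` is a hypothetical stand-in for an MCP client's invocation method, and the response shape is an illustrative assumption.

```python
# Sketch of the search-then-scan workflow the description recommends.
def call_tool(name, arguments):
    # Stub: a real MCP client would send a tools/call request over the
    # transport. Responses here are illustrative assumptions.
    if name == "cryptoguard_search":
        return {"coin_id": "pepe"}
    return {"status": "ok"}

# Step 1: resolve an ambiguous name/symbol to a CoinGecko coin ID.
result = call_tool("cryptoguard_search", {"query": "pepe"})
coin_id = result["coin_id"]

# Step 2: feed the resolved ID into scan_token (or validate_trade).
scan = call_tool("cryptoguard_scan_token", {"coin_id": coin_id, "sensitivity": 1.0})
print(coin_id, scan)
```

Resolving the ID first avoids a failed or mis-targeted scan when the user supplies a ticker or contract address instead of a CoinGecko coin ID.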
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the operation as readOnlyHint=true and idempotentHint=true. The description adds valuable workflow context (resolving IDs for specific sibling tools) and implies the return value (CoinGecko coin ID), which compensates for the lack of output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short lines: purpose statement, usage guideline, and concrete example. Information is front-loaded with no redundancy or wasted words. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with one parameter, 100% schema coverage, and good annotations, the description is complete. It explains the output conceptually (returns coin ID) and maps the tool to its ecosystem (sibling dependencies), which suffices given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents the 'query' parameter. The description includes a helpful usage example ('search 'pepe''), but this demonstrates tool usage rather than adding semantic detail about the parameter itself that the schema doesn't already cover.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Search') and clearly identifies the resource (token's CoinGecko coin ID) and search methods (name, symbol, or contract address). It effectively distinguishes itself from siblings by stating it finds the 'coin_id' needed for 'scan_token' or 'validate_trade'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Use this first if you're unsure of the correct coin_id for scan_token or validate_trade.' This clearly signals when to use this tool versus its siblings and establishes it as a prerequisite step in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cryptoguard_validate_trade (Read-only, Idempotent)
Validate a crypto trade BEFORE execution. Returns a verdict: PROCEED, CAUTION, or BLOCK. Runs 5 checks: peer anomaly scan via WaveGuard physics engine, self-history comparison, rug pull risk assessment, CEX/DEX price cross-check, and concentration risk analysis. Accepts token name, symbol, or contract address.
Example: validate buying $500 of PEPE before executing.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Blockchain for contract address resolution (ethereum, solana, base, bsc, polygon, avalanche, arbitrum). | |
| token | Yes | Token to validate. Can be a name (bitcoin), symbol (BTC), or contract address (0x...). | |
| action | No | Trade action type (default: buy). | |
| amount_usd | No | Trade amount in USD for concentration risk analysis. | |
| pair_address | No | DEX pair address for rug pull check. | |
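Mirroring the "$500 of PEPE" example in the description, a call might look like this Python sketch. The JSON-RPC envelope and the choice of chain are assumptions; parameter names and defaults come from the table above.

```python
import json

# Hypothetical tools/call payload for cryptoguard_validate_trade.
payload = {
    "method": "tools/call",
    "params": {
        "name": "cryptoguard_validate_trade",
        "arguments": {
            "token": "PEPE",      # required: name, symbol, or 0x... contract address
            "action": "buy",      # optional, default "buy"
            "amount_usd": 500,    # optional: enables concentration risk analysis
            "chain": "ethereum",  # optional: used for contract address resolution
        },
    },
}
print(json.dumps(payload, indent=2))
```

Per the description, the response carries one of three verdicts (PROCEED, CAUTION, BLOCK), so an agent's gating logic can branch on that string before executing the trade.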
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true. The description adds significant value by disclosing the ternary return verdicts (PROCEED, CAUTION, BLOCK) and detailing the 5 internal validation checks (including the 'WaveGuard physics engine'), which helps the agent understand the tool's decision logic beyond the safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Exceptionally well-structured: execution timing → return values → methodology (5 checks) → input flexibility → example. Every sentence conveys unique information with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters and no output schema, the description adequately compensates by documenting return values (PROCEED/CAUTION/BLOCK) and providing an example. It could be improved by noting behavior for unknown tokens (leveraging openWorldHint=true) or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description enhances this by mapping parameters to specific checks: amount_usd is linked to 'concentration risk analysis' and pair_address to 'rug pull check,' providing semantic context that the schema alone doesn't convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Validate') + resource ('crypto trade') and explicitly scopes the tool to 'BEFORE execution.' It distinguishes from execution tools and siblings by enumerating 5 specific checks performed (peer anomaly, rug pull, CEX/DEX cross-check, etc.), clearly contrasting with narrower siblings like cryptoguard_rug_check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear temporal context ('BEFORE execution') and includes a concrete example ('validate buying $500 of PEPE'). However, it lacks explicit guidance on when to use this tool versus siblings like cryptoguard_validate_trade_plus or cryptoguard_rug_check (e.g., when comprehensive vs. spot-checking is needed).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cryptoguard_validate_trade_plus (Read-only, Idempotent)
Premium stateless decision bundle for one-shot trade gating. Caller provides full context (training/test + optional what-if blocks) and receives verdict, risk score, instability/mechanism signals, and recommendation.
| Name | Required | Description | Default |
|---|---|---|---|
| test | Yes | Trade candidate sample to evaluate. | |
| sequence | No | Optional ordered sequence for trajectory/regime checks. | |
| training | Yes | Reference normal samples. | |
| field_level | No | 0=real field, 1=complex field (default: 1). | |
| sensitivity | No | Anomaly sensitivity (default: 1.0). | |
| intervention_tests | No | Optional intervention variants for mechanism probing. | |
| intervention_labels | No | Optional labels for intervention tests. | |
| counterfactual_tests | No | Optional what-if variants for sensitivity analysis. | |
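A fuller sketch of the one-shot decision bundle. As with the other examples, the JSON-RPC envelope and the flat numeric sample shape are illustrative assumptions; only the parameter names, requiredness, and defaults come from the schema above.

```python
import json

# Hypothetical tools/call payload for cryptoguard_validate_trade_plus.
payload = {
    "method": "tools/call",
    "params": {
        "name": "cryptoguard_validate_trade_plus",
        "arguments": {
            "training": [                            # required: reference normal samples
                {"price": 1.00, "volume": 5000},
                {"price": 1.02, "volume": 5200},
            ],
            "test": {"price": 3.40, "volume": 900},  # required: trade candidate
            "sequence": [                            # optional: ordered trajectory
                {"price": 1.00, "volume": 5000},
                {"price": 1.80, "volume": 2400},
                {"price": 3.40, "volume": 900},
            ],
            "counterfactual_tests": [                # optional: what-if variants
                {"price": 3.40, "volume": 5000},
            ],
            "sensitivity": 1.0,                      # optional, default 1.0
            "field_level": 1,                        # optional: 0=real, 1=complex (default 1)
        },
    },
}
print(json.dumps(payload, indent=2))
```

Because the tool is stateless, every call must carry the full training context; nothing is remembered between requests.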
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety/idempotency; description adds valuable context by enumerating return values (verdict, risk score, instability/mechanism signals, recommendation) which compensates for missing output_schema, and confirms 'stateless' nature aligning with idempotentHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two densely packed sentences with zero waste. First sentence establishes tool identity and nature; second sentence covers input requirements and output guarantees. Every clause conveys distinct information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex 8-parameter tool with ML-style inputs and no output schema, description successfully explains the core paradigm and return structure. Minor gaps remain (e.g., specific enum semantics for field_level, error conditions), but coverage is strong given the constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema has 100% coverage (baseline 3), description adds conceptual grouping that aids understanding: 'training/test' paradigm explains the required parameters' relationship, and 'what-if blocks' maps to intervention/counterfactual test parameters, providing semantic coherence.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb+resource ('trade gating') and distinguishes from sibling 'validate_trade' by emphasizing its 'Premium' positioning, its extra inputs (what-if blocks, training/test paradigm), and its comprehensive output bundle (verdict, risk score, signals).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'one-shot trade gating' and 'full context' requirements, but lacks explicit guidance on when to use this 'Plus' version versus the standard 'validate_trade' sibling or other tools like 'counterfactual_trade'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.