
BaseGuard + WalletGuard

Ownership verified

Server Details

BaseGuard + WalletGuard provides deterministic on-chain safety verdicts for AI agents — check any token for rug risk, LP lock status, and deployer history, or profile any wallet for bot activity, whale classification, and rug participation, across Base, Ethereum, Arbitrum, and Solana.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

Tool Descriptions (Grade: A)

Average 4.4/5 across 5 of 5 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Every tool has a clearly distinct purpose with no ambiguity. The three token safety tools are explicitly tiered (lite, standard, pro) with clear scopes, and the two wallet tools are similarly differentiated (quick vs. full). Descriptions explicitly guide users on when to use each tool, preventing misselection.

Naming Consistency: 5/5

Tool names follow a perfectly consistent verb_noun pattern with clear modifiers (e.g., check_token_safety, check_token_safety_lite, check_token_safety_pro, check_wallet, check_wallet_quick). All use snake_case and the same root verbs, making the set highly predictable and readable.

Tool Count: 5/5

With 5 tools, the server is well-scoped for its security analysis domain. Each tool earns its place by covering distinct use cases (e.g., quick checks vs. full analyses, token vs. wallet focus), avoiding bloat while providing comprehensive functionality.

Completeness: 5/5

The tool surface offers complete coverage for token and wallet safety analysis, with no obvious gaps. It includes tiered options for both token and wallet checks (lite/standard/pro and quick/full), supporting workflows from rapid screening to in-depth profiling, and explicitly handles caching and pricing where relevant.

Available Tools

5 tools
check_token_safety (Grade: A)
Read-only
Inspect

Full token safety analysis. Returns a risk score (0-100, lower is safer), a SAFE / CAUTION / AVOID recommendation, a confidence level, and three sub-checks: deployer wallet age in days, whether the liquidity pool is locked, and top-10 holder concentration percentage. Use this before trading or listing any token. Supports Base, Ethereum, Arbitrum, and Solana. Results are cached for 4 hours.

Parameters (JSON Schema):
- chainId (optional; default: base): Blockchain network the token is deployed on. Defaults to Base if omitted.
- contractAddress (required): Token contract address to analyse. Use 0x-prefixed hex for EVM chains (Base, Ethereum, Arbitrum) or base58 for Solana.
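The schema's two address formats (0x-prefixed hex for EVM chains, base58 for Solana) can be told apart by shape alone before a call is made. A minimal sketch, assuming a typical base58 length range of 32–44 characters (an illustrative bound, not part of the schema):

```python
import re

# Base58 alphabet excludes 0, O, I, and l; length bound is an assumption.
BASE58_RE = re.compile(r"^[1-9A-HJ-NP-Za-km-z]{32,44}$")
# EVM addresses: 0x followed by exactly 40 hex characters.
EVM_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def address_kind(contract_address: str) -> str:
    """Classify a contractAddress as 'evm', 'solana', or 'unknown' by format."""
    if EVM_RE.fullmatch(contract_address):
        return "evm"
    if BASE58_RE.fullmatch(contract_address):
        return "solana"
    return "unknown"
```

A client could use this to set chainId sensibly, or to reject malformed input before spending a tool call.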
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and openWorldHint=true, covering safety and data scope. The description adds valuable behavioral context beyond annotations: it specifies the tool is for 'token safety analysis' (purpose), mentions caching for 4 hours (performance/limitation), and lists supported blockchains (scope). No contradictions with annotations exist, and the added details enhance understanding of the tool's operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with no wasted words. It front-loads key information (purpose and outputs), followed by usage guidance, supported chains, and caching details. Each sentence adds value, making it efficient for an AI agent to parse and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (safety analysis with multiple outputs), annotations cover read-only and open-world aspects, but there is no output schema. The description compensates by detailing the return values (risk score, recommendation, confidence, sub-checks), which is crucial for understanding results. It could improve by specifying error handling or limitations, but it's largely complete for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters (chainId and contractAddress). The description does not add significant parameter semantics beyond the schema, as it focuses on outputs and usage. However, it implies the contractAddress parameter is central to the analysis, reinforcing its importance. Baseline score of 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Full token safety analysis' with specific outputs (risk score, recommendation, confidence level, three sub-checks). It distinguishes from siblings by specifying it's the 'full' analysis, implying check_token_safety_lite and check_token_safety_pro are variants with different scopes. The verb 'analyze' and resource 'token' are explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this before trading or listing any token' specifies when to use it. It implies alternatives by mentioning 'full' analysis, suggesting lite/pro versions might be for different contexts. The supported chains (Base, Ethereum, Arbitrum, Solana) and caching behavior (4 hours) help determine applicability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_token_safety_lite (Grade: A)
Read-only
Inspect

Fast token pre-trade check — deployer wallet age and LP lock status only, no holder concentration analysis. Typically completes in under 3 seconds. Returns risk score, SAFE / CAUTION / AVOID recommendation, deployer age in days, and LP lock flag. Use this when screening many tokens quickly. For full holder analysis use check_token_safety instead.

Parameters (JSON Schema):
- chainId (optional; default: base): Blockchain network the token is deployed on. Defaults to Base if omitted.
- contractAddress (required): Token contract address to check. Use 0x-prefixed hex for EVM chains or base58 for Solana.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies performance ('Typically completes in under 3 seconds'), output format ('Returns risk score, SAFE / CAUTION / AVOID recommendation, deployer age in days, and LP lock flag'), and functional limitations ('no holder concentration analysis'). Annotations cover read-only and open-world aspects, but the description enriches this with practical details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: it front-loads the core purpose, follows with key details (speed, output, usage guidance), and uses only essential sentences with zero wasted words, making it efficient for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, lack of output schema, and rich annotations, the description is nearly complete: it covers purpose, usage, behavior, and output details. It could improve slightly by mentioning error handling or specific constraints, though the annotations and schema provide a solid foundation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents both parameters. The description does not add any parameter-specific information beyond what the schema provides, such as explaining how 'contractAddress' relates to the token check. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Fast token pre-trade check') and resources ('deployer wallet age and LP lock status'), while explicitly distinguishing it from its sibling 'check_token_safety' by noting it excludes 'holder concentration analysis'. This provides precise differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use this when screening many tokens quickly') and provides a clear alternative ('For full holder analysis use check_token_safety instead'), giving complete guidance on tool selection versus siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_token_safety_pro (Grade: A)
Read-only
Inspect

Pro token safety analysis. Runs the full standard verdict (deployer age, LP lock, holder concentration) and adds two additional signals: (1) Farcaster social activity — casts mentioning the token address in the last 24h, including mention count, unique caster count, and keyword-based sentiment (positive/neutral/negative); (2) Holder growth velocity — estimated holder count now vs ~24h ago, growth rate percentage, and trend (accelerating/stable/declining). Base chain only for holder growth; all chains for Farcaster. Returns a proVerdict that may upgrade SAFE to CAUTION when holder growth is anomalous (>50% or <-30%) or Farcaster sentiment is negative. CAUTION and AVOID are never downgraded. Priced at $0.50 USDC via x402.

Parameters (JSON Schema):
- chainId (optional; default: base): Blockchain network the token is deployed on. Defaults to Base if omitted.
- contractAddress (required): Token contract address to analyse. Use 0x-prefixed hex for EVM chains (Base, Ethereum, Arbitrum) or base58 for Solana.
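The proVerdict rule the description states is precise enough to sketch: SAFE may be upgraded to CAUTION when holder growth is anomalous (>50% or <-30%) or Farcaster sentiment is negative, and CAUTION and AVOID are never downgraded. An illustrative reimplementation of that documented rule, not the server's actual code:

```python
def pro_verdict(base_verdict: str, holder_growth_pct: float, sentiment: str) -> str:
    """Apply the documented upgrade rule to a standard verdict.

    Only SAFE can change, and only upward to CAUTION; CAUTION and AVOID
    pass through unchanged (never downgraded).
    """
    anomalous_growth = holder_growth_pct > 50 or holder_growth_pct < -30
    if base_verdict == "SAFE" and (anomalous_growth or sentiment == "negative"):
        return "CAUTION"
    return base_verdict
```

Note the asymmetry: positive signals never relax a CAUTION or AVOID verdict, so the pro tier can only make the standard verdict stricter.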
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it explains the two additional analysis signals, details how verdicts can be upgraded (SAFE to CAUTION under specific conditions), clarifies verdict downgrade rules (CAUTION and AVOID never downgraded), specifies chain usage differences (base chain for holder growth, all chains for Farcaster), and discloses pricing ($0.50 USDC via x402). Annotations only indicate read-only and open-world hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose, followed by detailed signal explanations and behavioral rules. Some sentences could be more concise (e.g., the pricing information might be separated), but overall it's well-structured with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple analysis components, verdict logic, chain-specific behavior, pricing) and lack of output schema, the description provides substantial context about what the tool does and returns. However, it doesn't fully detail the output structure (e.g., format of proVerdict) or error handling, leaving some gaps for an agent to understand exact responses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete parameter documentation. The description doesn't add meaningful parameter semantics beyond what's in the schema, though it implies contractAddress is used for analysis across the described features. Baseline 3 is appropriate when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'Pro token safety analysis' with specific components: 'full standard verdict' and two additional signals (Farcaster social activity and holder growth velocity). It distinguishes from siblings by specifying 'Pro' analysis with enhanced features beyond basic safety checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'Pro' analysis and pricing information, suggesting this is for advanced users needing comprehensive safety checks. However, it doesn't explicitly state when to choose this tool over siblings like 'check_token_safety' or 'check_token_safety_lite', nor does it provide exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_wallet (Grade: A)
Read-only
Inspect

Full wallet intelligence profile for any EVM address. Returns a CLEAN / SUSPICIOUS / FLAGGED verdict, composite risk score (0-100, lower is safer), whale/mid/retail classification by holdings, bot likelihood score (0-100), contract deploy count as a rug-pull proxy, active chains, wallet age in days, transaction count, and risk flags such as NEW_WALLET, HIGH_BOT_SCORE, MIXER_INTERACTIONS, and SERIAL_DEPLOYER. Use this to evaluate token deployers, counterparties, or airdrop claimants. Cached 1 hour.

Parameters (JSON Schema):
- address (required): EVM wallet address to profile. Must be a 0x-prefixed 42-character hex address.
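A caller consuming check_wallet results typically turns the verdict and risk flags into a yes/no gate. A minimal sketch under stated assumptions: the field names (`verdict`, `riskFlags`) and the choice of which flags are hard stops are illustrative, not part of the tool's documented contract:

```python
# Assumption: treat these two documented flags as hard stops; NEW_WALLET and
# HIGH_BOT_SCORE might instead feed a softer score in a real policy.
BLOCKING_FLAGS = {"MIXER_INTERACTIONS", "SERIAL_DEPLOYER"}

def should_block(profile: dict) -> bool:
    """Gate a counterparty on a check_wallet result.

    Blocks on a FLAGGED verdict, or on any flag in BLOCKING_FLAGS.
    Field names are assumed, not taken from a published output schema.
    """
    if profile["verdict"] == "FLAGGED":
        return True
    return bool(BLOCKING_FLAGS & set(profile.get("riskFlags", [])))
```

Since results are cached for an hour, a gate like this can be re-run cheaply across many candidate addresses.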
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true and openWorldHint=true, indicating safe read operations and broad applicability. The description adds valuable behavioral context: 'Cached 1 hour' (rate/performance consideration) and details on what data is returned (e.g., risk scores, flags), which goes beyond annotations to help the agent understand output behavior and caching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specific outputs and usage guidelines, all in two efficient sentences. Every sentence adds value: the first details the profile, and the second provides context and caching info, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple outputs like risk scores and flags) and lack of an output schema, the description does a good job explaining what is returned. However, it could be more complete by specifying exact output formats or error handling, though annotations cover safety and scope adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'address' fully documented in the schema. The description does not add any parameter-specific details beyond what the schema provides (e.g., no extra constraints or examples), so it meets the baseline for high schema coverage without compensating further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Full wallet intelligence profile for any EVM address' with specific outputs like verdict, risk score, classification, and risk flags. It distinguishes from siblings by focusing on wallet profiling rather than token safety checks, making it easy to differentiate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this to evaluate token deployers, counterparties, or airdrop claimants.' It implies alternatives by naming sibling tools like 'check_token_safety' and 'check_wallet_quick', guiding the agent to choose based on the evaluation target (wallet vs. token) and depth needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_wallet_quick (Grade: A)
Read-only
Inspect

Fast wallet check — wallet age in days and outgoing transaction count only. Typically completes in under 2 seconds. Returns a simple CLEAN / SUSPICIOUS verdict alongside age and tx count. Use this when confirming a wallet is not brand-new before a quick trust decision, such as filtering obvious sybils in an airdrop. For full risk profiling including bot score, classification, and rug history use check_wallet.

Parameters (JSON Schema):
- address (required): EVM wallet address to check. Must be a 0x-prefixed 42-character hex address.
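The description's airdrop example (filtering obvious sybils on verdict, wallet age, and transaction count) can be sketched as a batch filter. The result field names (`walletAgeDays`, `txCount`) and the thresholds are assumptions for illustration, not documented defaults:

```python
def filter_sybils(results: list[dict], min_age_days: int = 7, min_tx: int = 5) -> list[str]:
    """Keep addresses whose check_wallet_quick result looks established.

    A wallet passes if its verdict is CLEAN and it clears the (illustrative)
    age and outgoing-transaction thresholds.
    """
    return [
        r["address"]
        for r in results
        if r["verdict"] == "CLEAN"
        and r["walletAgeDays"] >= min_age_days
        and r["txCount"] >= min_tx
    ]
```

At a few seconds per check, a screen like this is the intended fast path; anything that survives it can then be handed to check_wallet for the full profile.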
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true and openWorldHint=true, indicating safe, non-destructive operations. The description adds valuable behavioral context: 'Typically completes in under 2 seconds' (performance), 'Returns a simple CLEAN / SUSPICIOUS verdict alongside age and tx count' (output format and decision logic), and the specific use case for quick trust decisions. It does not contradict annotations and enhances understanding beyond the structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by performance details, return values, and usage guidelines. Every sentence adds value: the first defines scope, the second gives speed, the third specifies output, and the fourth provides clear usage context. No wasted words, and it's structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (one parameter, no output schema), the description is largely complete. It covers purpose, performance, output format, and usage guidelines. However, it lacks details on error handling or edge cases (e.g., invalid addresses), which would be helpful. Annotations provide safety hints, but the description could be slightly more comprehensive for a tool used in trust decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the address parameter fully documented in the schema. The description does not add any additional meaning or context about the parameter beyond what the schema provides (e.g., it doesn't explain why address is needed or how it's used). Baseline score of 3 is appropriate as the schema carries the full burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fast wallet check — wallet age in days and outgoing transaction count only.' It specifies the exact metrics (age and tx count) and distinguishes itself from the sibling 'check_wallet' by emphasizing speed and limited scope. The verb 'check' with the resource 'wallet' is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this when confirming a wallet is not brand-new before a quick trust decision, such as filtering obvious sybils in an airdrop.' It also provides a clear alternative: 'For full risk profiling including bot score, classification, and rug history use check_wallet.' This gives direct guidance on use cases and sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

