mcp-server
Server Details
Verify crypto wallets and generate proof of funds letters for buying real estate with crypto.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: realopengroup/mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.5/5 across 17 of 17 tools scored. Lowest: 1/5.
Most tools have distinct purposes, such as add_wallet for wallet submission, verify_wallet_signature for ownership verification, and get_wallet_summary for wallet status. However, get_faq, get_how_it_works, and get_snippets overlap: all serve informational content, and nothing in their definitions differentiates them, which can confuse agents.
The tool names follow a consistent verb_noun pattern throughout, such as add_wallet, get_account_status, and start_identity_verification, which is clear and predictable. There are minor deviations like logout (a single verb) and generate_branded_graphic (which uses 'generate' instead of 'get' or 'add'), but overall the naming is highly consistent and readable.
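The verb_noun convention described above can be checked mechanically. A minimal sketch (tool names taken from this listing; the regex is one plausible encoding of the pattern, not the server's own rule) that flags deviations such as logout:

```python
import re

# Tool names as listed on this page.
TOOL_NAMES = [
    "add_wallet", "verify_wallet_signature", "get_wallet_summary",
    "get_account_status", "start_identity_verification", "logout",
    "generate_branded_graphic", "get_faq", "get_how_it_works",
]

# verb_noun: a leading verb, an underscore, then one or more noun tokens.
VERB_NOUN = re.compile(r"^[a-z]+_[a-z]+(_[a-z]+)*$")

def deviations(names):
    """Return names that do not match the verb_noun pattern."""
    return [n for n in names if not VERB_NOUN.match(n)]

print(deviations(TOOL_NAMES))  # ['logout'] -- a single verb with no noun
```

Running this against the full 17-tool list would surface exactly the deviations the assessment calls out.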
With 17 tools, the count is borderline high for a server focused on identity verification and wallet management, as it includes several informational tools (e.g., get_faq, get_how_it_works) that might be consolidated. While not excessive, it feels slightly heavy and could benefit from streamlining to improve focus and usability.
The tool set covers the core workflows well, including wallet addition, verification (via signature or transfer), identity verification, and status checks, with no major gaps in the primary domain. Minor gaps remain: there are no tools for updating or deleting wallets, which agents would have to work around, but the surface is otherwise complete for the stated purpose.
Available Tools
17 tools
add_wallet (Grade A)
Submit a public wallet address for verification. Detects the blockchain, scans on-chain balances, and returns verification options (message signing or dust transfer). IMPORTANT: When presenting results, provide copy buttons for the verification message (the user must paste this exact string into their wallet to sign), any deposit address, and the exact transfer amount. These values must be copied precisely — do not let the user retype them.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | The public wallet address to add | |
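As a concrete illustration, an MCP tools/call request for add_wallet carries the single required argument. The payload below is a hypothetical sketch following the MCP JSON-RPC shape; the request id and the zero address are placeholders, not values from this server:

```python
import json

# Hypothetical JSON-RPC request an MCP client would send to call add_wallet.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_wallet",
        "arguments": {
            # The only parameter; required per the schema above.
            "address": "0x0000000000000000000000000000000000000000",
        },
    },
}

print(json.dumps(request, indent=2))
```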
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate it's not read-only (readOnlyHint: false), open-world (openWorldHint: true), and non-destructive (destructiveHint: false). The description adds valuable context beyond this: it discloses that the tool performs blockchain detection and balance scanning, returns verification options, and includes critical behavioral instructions for presenting results (e.g., providing copy buttons, ensuring precise copying). This enriches the agent's understanding of the tool's behavior and output handling.
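The hints referenced here are structured booleans on the tool definition (field names per the MCP ToolAnnotations type, values as reported for add_wallet). A sketch of one conservative client policy built on them; the gating rule is an assumption for illustration, not Glama's actual behavior:

```python
# Annotations reported for add_wallet (MCP ToolAnnotations fields).
annotations = {
    "readOnlyHint": False,
    "destructiveHint": False,
    "idempotentHint": False,
    "openWorldHint": True,
}

def requires_confirmation(ann: dict) -> bool:
    """Confirm any call that can mutate state.

    Hints are advisory, so absent keys default to the unsafe assumption.
    """
    read_only = ann.get("readOnlyHint", False)
    destructive = ann.get("destructiveHint", True)
    return (not read_only) or destructive

print(requires_confirmation(annotations))  # True: add_wallet is not read-only
```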
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well structured and concise, conveying purpose, process, and critical instructions in turn. Every sentence earns its place: the first states what the tool does, the second explains the detection and verification process, and the closing instructions cover results presentation, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (involving blockchain detection, balance scanning, and verification options), no output schema, and rich annotations, the description is mostly complete. It covers the tool's purpose, behavior, and result-handling instructions. However, it could be more complete by explicitly detailing the verification options or error cases, though the annotations provide some safety context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with one parameter 'address' fully documented in the schema. The description adds minimal semantics beyond the schema by implying the address is for verification and must be public, but does not provide additional details like format constraints or examples. Baseline 3 is appropriate as the schema carries the primary burden.
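The "schema description coverage" figure used throughout these assessments can be computed directly from a tool's JSON Schema. A sketch under the assumption that coverage means the fraction of properties carrying a non-empty description; the schema below mirrors add_wallet's single documented parameter:

```python
# inputSchema mirroring add_wallet's single documented parameter.
schema = {
    "type": "object",
    "properties": {
        "address": {
            "type": "string",
            "description": "The public wallet address to add",
        },
    },
    "required": ["address"],
}

def description_coverage(input_schema: dict) -> float:
    """Fraction of parameters whose property has a non-empty description."""
    props = input_schema.get("properties", {})
    if not props:
        return 1.0  # zero-parameter tools are trivially fully covered
    documented = sum(1 for p in props.values() if p.get("description"))
    return documented / len(props)

print(description_coverage(schema))  # 1.0
```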
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Submit a public wallet address for verification. Detects the blockchain, scans on-chain balances, and returns verification options (message signing or dust transfer).' It distinguishes from siblings like 'verify_wallet_signature' and 'verify_wallet_transfer' by focusing on initial submission and verification setup rather than execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to initiate wallet verification by submitting an address. It implies usage by describing the process and results, but does not explicitly state when not to use it or name alternatives among siblings, such as when to use 'verify_wallet_signature' directly instead.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_branded_graphic (Grade D)
| Name | Required | Description | Default |
|---|---|---|---|
| style | No | Visual style of the generated image | photo_realistic |
| format | Yes | Output format controlling dimensions and logo placement | |
| prompt | Yes | What the graphic should depict — be specific about the scene, mood, and composition | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_proof_of_funds (Grade A)
Generate a proof-of-funds letter (PDF) for the authenticated user. Requires completed identity verification and at least one verified wallet. Returns a download link valid for 30 days. When presenting results, provide a prominent clickable download link and a copy button for the URL.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Requested proof-of-funds amount | |
| currency | No | Currency for the letter (default: USD) | USD |
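The currency default noted in the table is the kind of value a client can fill in before dispatch. A sketch of default application for this tool's arguments; the parameter names come from the table above, while the merge helper and the amount are illustrative:

```python
# Defaults for generate_proof_of_funds, per the parameter table.
DEFAULTS = {"currency": "USD"}

def with_defaults(arguments: dict) -> dict:
    """Merge caller-supplied arguments over the schema defaults."""
    return {**DEFAULTS, **arguments}

# Caller supplies only the required parameter; currency falls back to USD.
args = with_defaults({"amount": 500000})
print(args)  # {'currency': 'USD', 'amount': 500000}
```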
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is not read-only, is open-world, and non-destructive. The description adds valuable behavioral context beyond annotations: it specifies the output format ('PDF'), the return type ('download link'), validity period ('valid for 30 days'), and presentation instructions ('prominent clickable download link and a copy button'). This enriches the agent's understanding of the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by prerequisites, return details, and presentation instructions. Each sentence adds essential information without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema), the description is mostly complete: it covers purpose, prerequisites, output format, and usage instructions. However, it lacks details on error conditions (e.g., what happens if prerequisites aren't met) or how the amount parameter interacts with verified wallets, leaving minor gaps in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters. The description does not add further meaning to parameters beyond what the schema provides, such as explaining how 'amount' relates to verified wallets or default behaviors. However, the baseline score of 3 is appropriate given the comprehensive schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('generate a proof-of-funds letter'), the resource type ('PDF'), and the target ('for the authenticated user'). It distinguishes this from sibling tools like 'get_wallet_summary' or 'search_assets' by focusing on document generation rather than data retrieval or verification processes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states prerequisites ('requires completed identity verification and at least one verified wallet'), providing clear context for when to use this tool. However, it does not specify when NOT to use it or name alternatives among sibling tools, such as whether 'get_account_status' might be a preliminary check.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_status (Grade A, Read-only, Idempotent)
Returns the authenticated user's RealOpen account status including identity verification, wallet summary, and proof-of-funds eligibility.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds context about what information is returned (e.g., identity verification status), which is useful but does not disclose additional behavioral traits like rate limits, authentication requirements, or response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and included data points. It is front-loaded with the main action and avoids any unnecessary words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema) and rich annotations covering safety and idempotency, the description provides sufficient context by detailing the specific account status information returned. However, it does not explain the return format or potential error conditions, leaving minor gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so there is no need for parameter information in the description. The description appropriately focuses on the tool's purpose and output without redundant parameter details, earning a baseline score of 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('authenticated user's RealOpen account status'), with explicit details on what information is included ('identity verification, wallet summary, and proof-of-funds eligibility'). It distinguishes itself from siblings like 'get_wallet_summary' by covering broader account status beyond just wallet details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying it returns 'the authenticated user's' status, suggesting it should be used for checking one's own account rather than others'. However, it does not explicitly state when to use this tool versus alternatives like 'get_wallet_summary' or 'verify_wallet_signature', nor does it provide exclusions or prerequisites beyond authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_faq (Grade D, Read-only, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Search keyword or question to find relevant FAQ entries | |
| category | No | Filter by FAQ category (e.g. "Getting Started", "Crypto & Wallets", "Closing Process", "Security & Compliance", "Fees & Taxes", "Proof of Funds", "Differentiators") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_fee_structure (Grade D, Read-only, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_how_it_works (Grade D, Read-only, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| perspective | No | Which perspective to return: buyer (step-by-step), agent (seller/listing-side experience), overview, or all | all |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_referral_link (Grade D, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| path | No | Page path on realopen.com, e.g. / or /how-it-works or /partners | / |
| ref_id | No | The affiliate's referral ID (auto-filled if embedded in connector URL or identified via API key) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_service_areas (Grade D, Read-only, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_snippets (Grade D, Read-only, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Filter by snippet tags (matches ANY) | |
| tone | No | Filter by snippet tone | |
| keyword | No | Free-text search on snippet text and title | |
| asset_id | No | Get snippets tied to a specific asset by ID | |
| platform | No | Filter by target platform | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_supported_crypto (Grade D, Read-only, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_wallet_summary (Grade B, Read-only, Idempotent)
Returns all wallets for the authenticated user with their balances, verification status, and proof-of-funds ceiling. When presenting results, provide copy buttons for wallet addresses and display crypto balances in a clear table format.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds that it returns data for the 'authenticated user' (implying auth needs) and specifies the scope ('all wallets'), which provides useful context beyond annotations. However, it doesn't mention rate limits, error conditions, or data freshness, leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: the first states the tool's purpose clearly, but the second ('When presenting results...') is prescriptive UI advice that doesn't belong in a tool description meant for AI agents. This wastes space and reduces conciseness. The structure is front-loaded with the core functionality, but the extraneous content lowers the score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should ideally explain return values more thoroughly. It lists data fields (balances, verification status, proof-of-funds ceiling) but doesn't detail formats or structure. With annotations covering safety and idempotency, and no parameters, it's minimally adequate but lacks depth for a tool returning complex wallet data, leaving the agent with incomplete context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is high. The description adds no parameter-specific semantics, but none are needed given the empty schema, so it meets expectations without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns all wallets for the authenticated user with their balances, verification status, and proof-of-funds ceiling.' It specifies the verb ('returns'), resource ('wallets'), and key data fields. However, it doesn't explicitly differentiate from sibling tools like 'get_account_status' or 'search_assets' that might also return wallet-related information, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., authentication), exclusions, or comparisons to siblings like 'get_account_status' or 'search_assets'. The second sentence about presentation formatting is irrelevant to usage decisions, leaving the agent with no contextual help for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
logout (grade A)
Sign out of your RealOpen MCP session. Use this when the user wants to switch accounts or disconnect.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
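Because logout takes no arguments, a call is just the tool name with an empty arguments object. A hypothetical JSON-RPC tools/call request (the shape MCP clients send over Streamable HTTP; the id value is arbitrary) might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "logout",
    "arguments": {}
  }
}
```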
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is not read-only and not destructive, but the description adds valuable context by specifying that it ends the session, which implies authentication changes and potential side effects like losing access until re-login. However, it doesn't detail rate limits or exact behavioral outcomes beyond session termination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with the core action and followed by usage guidance, with no wasted words or redundant information, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple, parameter-less tool with annotations covering safety aspects, the description is complete enough by explaining what it does and when to use it. However, without an output schema, it could briefly mention the expected result (e.g., confirmation of logout), but this is a minor gap given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately does not discuss parameters, as none exist, and focuses on the tool's purpose instead.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Sign out') and the resource ('your RealOpen MCP session'), distinguishing it from all sibling tools which are about account management, asset operations, or information retrieval rather than session termination.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool ('when the user wants to switch accounts or disconnect'), offering clear context and distinguishing it from alternatives like 'get_account_status' and other account-related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_assets (grade D, Read-only, Idempotent)
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Filter by one or more tag slugs (assets matching ANY tag are returned) | |
| keyword | No | Free-text keyword search across label and description | |
| category | No | Filter by asset category | |
| file_type | No | Filter by file type | |
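All four filters are optional and combinable; per the tags description, assets matching ANY supplied tag are returned (OR semantics). A hypothetical tools/call request sketching a combined filter (the tag slugs, keyword, and file type are invented for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_assets",
    "arguments": {
      "tags": ["branding", "social"],
      "keyword": "proof of funds",
      "file_type": "png"
    }
  }
}
```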
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Tool has no description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Tool has no description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has no description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Tool has no description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tool has no description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
start_identity_verification (grade A)
Start or restart identity verification (KYC). If a previous session exists that was incomplete, denied, or expired, this creates a new one. Returns a URL the user must open in their browser.
| Name | Required | Description | Default |
|---|---|---|---|
| force_new | No | Force a new verification session even if one is pending (use after denial or expiry) | |
| id_number_required | No | Whether to require an ID number on the document (default: true) | true |
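The two flags map directly onto the retry flow the description outlines: force_new for restarting after a denial or expiry, and id_number_required for relaxing the default document check. A hypothetical tools/call request for restarting a denied session (argument values chosen for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "start_identity_verification",
    "arguments": {
      "force_new": true,
      "id_number_required": false
    }
  }
}
```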
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, openWorldHint=true, and destructiveHint=false, but the description adds valuable behavioral context beyond these: it explains that this creates a new session when previous ones are incomplete/denied/expired, and discloses the return format (a URL the user must open). This goes beyond what annotations provide about mutability and safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first explains the core functionality and when to use it, the second discloses the return format. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter mutation tool with no output schema, the description provides good coverage: purpose, usage context, and return format. However, it doesn't mention authentication requirements, rate limits, or what happens after the user opens the URL, leaving some behavioral aspects unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 where schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Start or restart identity verification (KYC)'), identifies the resource (identity verification session), and distinguishes from siblings by focusing on KYC initiation rather than wallet operations or information retrieval tools like get_account_status or get_faq.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('If a previous session exists that was incomplete, denied, or expired, this creates a new one'), but doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_wallet_signature (grade A)
Submit a signed message to verify wallet ownership. The user must have signed the exact verification message provided by add_wallet. When collecting the signature from the user, remind them to paste the full signature hash from their wallet.
| Name | Required | Description | Default |
|---|---|---|---|
| signature | Yes | The signature hash produced by signing the verification message with your wallet | |
| wallet_id | Yes | Wallet UUID from the add_wallet response | |
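Both parameters are required, and wallet_id must come from an earlier add_wallet response. A hypothetical tools/call request (the UUID and signature values are placeholders, not real data):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "verify_wallet_signature",
    "arguments": {
      "wallet_id": "123e4567-e89b-12d3-a456-426614174000",
      "signature": "<full-signature-hash-from-the-user-wallet>"
    }
  }
}
```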
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-destructive, open-world operation, which the description doesn't contradict. The description adds useful context about the prerequisite ('user must have signed the exact verification message from add_wallet') and user interaction guidance, but it doesn't disclose potential outcomes, error conditions, or authentication requirements beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states the purpose and prerequisite, the second provides actionable user guidance. It's front-loaded with the core function and efficiently includes necessary details without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters, full schema coverage, and no output schema, the description is mostly complete. It covers the purpose, prerequisite, and user instructions well. However, it lacks details on what happens after verification (e.g., success/failure outcomes), which would be helpful given no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds little beyond the schema: it links 'wallet_id' to the 'add_wallet' response and notes that 'signature' signs the verification message, but this mostly reinforces what the schema already says. The baseline of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Submit a signed message to verify wallet ownership'), identifies the resource ('wallet'), and distinguishes it from siblings by referencing the prerequisite 'add_wallet' tool. It goes beyond the name by explaining the verification process.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool: after 'add_wallet' provides a verification message, and it includes a specific user instruction ('remind them to paste the full signature hash') that clarifies the context. It also implicitly distinguishes from other wallet-related tools by focusing on signature verification rather than creation or summary.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_wallet_transfer (grade A)
Submit a transaction hash for the dust/test transfer verification method. The user must have sent the exact amount to the deposit address provided by add_wallet. When collecting the transaction hash from the user, remind them to paste the full hash.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet_id | Yes | Wallet UUID from the add_wallet response | |
| transaction_hash | Yes | The transaction hash of the dust transfer | |
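As with verify_wallet_signature, both arguments are required and wallet_id comes from the add_wallet response; the difference is that the proof here is an on-chain transaction hash rather than a signed message. A hypothetical tools/call request (placeholder values only):

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "verify_wallet_transfer",
    "arguments": {
      "wallet_id": "123e4567-e89b-12d3-a456-426614174000",
      "transaction_hash": "<full-transaction-hash-from-the-user>"
    }
  }
}
```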
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-destructive, open-world operation that is not read-only, suggesting it performs a verification that may update state. The description adds context about the verification method (dust/test transfer) and the requirement for an exact amount, but does not detail error conditions, rate limits, or authentication needs beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by essential prerequisites and a practical reminder. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (verification with prerequisites), lack of output schema, and rich annotations, the description is mostly complete. It covers purpose, usage context, and parameter semantics, but could benefit from details on return values or error handling to fully compensate for the missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds semantic context by linking wallet_id to add_wallet response and specifying transaction_hash is for a dust transfer, but does not provide additional syntax or format details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('submit a transaction hash for verification') and resource ('dust/test transfer verification method'), distinguishing it from siblings like verify_wallet_signature. It explicitly mentions the exact amount and deposit address from add_wallet, providing precise context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: after add_wallet has provided a deposit address and the user has sent the exact amount. It includes a reminder for collecting the transaction hash, but does not explicitly state when not to use it or name alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!