CIPHER x402 — Paid Solana & Crypto Tools
Server Details
8 CIPHER tools — Solana scan, breach check, Jito, FRED, Drift, repo health, more. x402-paid on Base.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: cryptomotifs/cipher-x402-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 8 of 8 tools scored. Lowest: 3.5/5.
Each tool has a clearly distinct purpose targeting different domains: Solana finance (drift exposure, wallet scan, Jito tips), security (password breach, security audit), data retrieval (FRED series, GitHub health, premium chapters). No overlap exists—an agent can easily select the right tool based on the task.
All tools follow a consistent verb_noun or verb_noun_noun pattern (e.g., check_drift_exposure, get_premium_cipher_chapter, solana_wallet_scan). The naming is uniform across the set with no mixing of conventions, making it predictable and readable.
With 8 tools, the count is reasonable for a server offering paid crypto and utility tools. It covers multiple domains without being overwhelming, though the scope is broad (Solana, security, data, GitHub), which might feel slightly scattered but remains manageable.
The server lacks clear domain cohesion, making completeness hard to assess. For Solana tools, there's coverage for exposure, scanning, and tips, but no create/update/delete operations. For other areas like security or data, tools are isolated without lifecycle coverage, leaving gaps for agent workflows.
Available Tools
8 tools

check_drift_exposure
Check a Solana wallet's exposure to Drift Protocol — open perp positions, collateral, unrealized PnL, liquidation risk distance. Priced at $0.01 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Base58 Solana wallet address. | |
| _payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list. | |
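The two-step flow described above can be sketched for a generic MCP client. This is a minimal illustration: the authorization fields, wallet address, and payload shape below are invented for the example and are not the real x402 v2 wire format.

```python
import base64
import json

# Hypothetical signed EIP-3009 authorization (field names illustrative only)
auth = {
    "from": "0xPayerAddress",
    "to": "0xPayeeAddress",
    "value": "10000",  # $0.01 USDC in 6-decimal base units
    "signature": "0xSig",
}
# x402 expects the authorization base64-encoded as the X-PAYMENT header value
x_payment = base64.b64encode(json.dumps(auth).encode()).decode()

# Call 1: no _payment -> the tool returns the 402 accept-list to sign
args_unpaid = {"wallet": "7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU"}
# Call 2: retry with the signed authorization -> paid response
args_paid = {**args_unpaid, "_payment": x_payment}
```

The same `_payment` pattern applies to every paid tool on this server; only the other arguments change.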
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the dual behavior based on payment presence, the pricing model ($0.01 USDC on Base), the payment mechanism (x402 v2 authorization), and the two possible return types. However, it doesn't mention rate limits, error conditions, or authentication requirements beyond the payment mechanism.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted words. The first sentence establishes purpose, the second provides pricing and payment context, and the third explains the dual behavior. Every sentence earns its place by delivering essential information about the tool's operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a paid API tool with dual behaviors and no output schema, the description does well to explain the core functionality, payment mechanism, and behavioral logic. However, it lacks information about response formats, error handling, or what specific data fields are returned in each scenario, which would be helpful for a tool with this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds some value by explaining the behavioral implications of the '_payment' parameter (determines whether you get paid data vs. accept-list), but doesn't provide additional semantic context beyond what's already in the schema descriptions for either parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check a Solana wallet's exposure to Drift Protocol') and enumerates the exact resources examined (open perp positions, collateral, unrealized PnL, liquidation risk distance). It distinguishes itself from sibling tools like 'solana_wallet_scan' or 'solana_wallet_security_audit_rules' by focusing specifically on Drift Protocol exposure rather than general scanning or security auditing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: when checking Drift Protocol exposure for a Solana wallet. It clearly distinguishes between two usage scenarios based on the presence of the '_payment' parameter: with payment it returns paid response data, without payment it returns the 402 accept-list for signing. This eliminates ambiguity about alternative approaches within the same tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_password_breach
Check whether a password hash prefix (SHA-1, first 5 chars) appears in the HIBP breach corpus. k-anonymity, no plaintext passwords sent. Priced at $0.005 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.
| Name | Required | Description | Default |
|---|---|---|---|
| _payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list. | |
| sha1_prefix | Yes | First 5 uppercase hex characters of the SHA-1 hash of the password (e.g. '21BD1'). | |
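The `sha1_prefix` value can be derived locally before calling the tool; a minimal sketch (the example password is illustrative):

```python
import hashlib

def hibp_prefix(password: str) -> str:
    # SHA-1 the password, keep only the first 5 uppercase hex chars;
    # under k-anonymity, only this prefix ever leaves the machine
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()[:5]

print(hibp_prefix("password"))  # SHA-1("password") starts 5baa61e4... -> "5BAA6"
```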
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does so effectively. It discloses key behavioral traits: pricing ($0.005 USDC on Base), payment mechanism (x402 v2 authorization), dual response behavior (breach results vs. accept-list), and security approach (k-anonymity, no plaintext passwords). It doesn't mention rate limits or error handling, but covers the essential operational characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted words. The first sentence establishes core functionality, the second explains pricing and payment mechanism, and the third clarifies the dual behavior based on payment presence. Every sentence earns its place by providing essential operational information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides excellent context about the dual-mode operation, payment requirements, and security approach. It doesn't describe the format of breach results or accept-list responses, which would be helpful given the lack of output schema. However, it covers the essential operational context thoroughly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds meaningful context beyond the schema: it explains that '_payment' unlocks paid responses and that its absence triggers accept-list return, and clarifies that sha1_prefix represents 'first 5 uppercase hex characters' for password hash checking. This provides valuable semantic understanding of how parameters affect tool behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check whether a password hash prefix appears in the HIBP breach corpus'), identifies the resource (HIBP breach corpus), and specifies the technical method (k-anonymity with SHA-1 first 5 chars). It distinguishes itself from sibling tools by focusing on password breach checking rather than drift exposure, financial data, or wallet security.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool: for checking password breach status via hash prefix. It clearly distinguishes between paid and unpaid usage scenarios: with payment returns breach results, without payment returns the 402 accept-list for signing. No sibling tools offer similar functionality, making alternatives clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fred_macro_series
Fetch a Federal Reserve Economic Data (FRED) series by ID — e.g. 'DGS10' (10Y yield), 'WALCL' (Fed balance sheet), 'T10Y2Y' (yield curve). Returns cleaned latest observations. Priced at $0.005 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.
| Name | Required | Description | Default |
|---|---|---|---|
| _payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list. | |
| series_id | Yes | FRED series ID (e.g. 'DGS10', 'WALCL', 'T10Y2Y', 'DFF'). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It explains the payment mechanism, pricing details, the conditional behavior based on the '_payment' parameter, what data is returned ('cleaned latest observations'), and the 402 accept-list fallback behavior. This provides complete transparency about how the tool operates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted words. It starts with the core purpose, provides examples, explains the return value, mentions pricing, and clearly documents the conditional payment behavior - all in three well-organized sentences that each serve a distinct purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides complete context. It explains what the tool does, how to use it, the payment mechanism, pricing, conditional behavior, and return values. Given the complexity of the payment system, this level of detail is appropriate and comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema description coverage is 100%, the description adds significant value beyond the schema by explaining the practical implications of the '_payment' parameter - specifically what happens when it's present vs. absent, and that it unlocks paid responses. It also provides real-world examples of series_id values that help users understand what to pass.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Fetch') and resource ('FRED series by ID'), provides concrete examples of series IDs, and distinguishes it from sibling tools by focusing on economic data retrieval. It goes beyond the name/title to explain what the tool actually does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (to fetch FRED economic data series) and includes clear conditional logic for the payment parameter - specifying exactly what happens with and without the '_payment' argument. It also mentions pricing details which helps users understand the cost context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
github_repo_health
Score a GitHub repository's health — commit cadence, issue-close latency, contributor diversity, CI green rate, release frequency. Returns 0-100 health score with per-factor breakdown. Priced at $0.02 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.
| Name | Required | Description | Default |
|---|---|---|---|
| repo | Yes | Repo name (e.g. 'solana'). | |
| owner | Yes | GitHub org or user (e.g. 'solana-labs'). | |
| _payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list. | |
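The server's actual scoring formula is not published; a minimal illustrative aggregation of a per-factor breakdown, assuming each factor is normalized to [0, 1] with equal weights (factor names and values are invented for the example):

```python
def health_score(factors: dict[str, float]) -> float:
    # equal-weight average of normalized factors, scaled to the 0-100 range
    return round(100 * sum(factors.values()) / len(factors), 1)

score = health_score({
    "commit_cadence": 0.9,
    "issue_close_latency": 0.7,
    "contributor_diversity": 0.5,
    "ci_green_rate": 0.95,
    "release_frequency": 0.6,
})
print(score)  # 73.0
```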
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool returns a 0-100 health score with per-factor breakdown, requires payment via a signed authorization for full functionality, and defaults to returning an accept-list without payment. It mentions pricing ($0.02 USDC on Base) and the payment mechanism (x402 v2), though it could add more on error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and metrics, then detailing payment requirements. Every sentence adds value: the first explains what the tool does, the second specifies the return format and pricing, and the third clarifies usage based on payment. It could be slightly more concise by combining some details, but overall it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a paid tool with conditional behavior and no output schema, the description is largely complete. It covers the purpose, metrics, return format, payment mechanism, and usage scenarios. However, it lacks details on error cases (e.g., invalid repository inputs) and does not fully describe the output structure beyond '0-100 health score with per-factor breakdown,' which could be more explicit without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters (owner, repo, _payment). The description adds context for the '_payment' parameter by explaining its effect on tool behavior (unlocks paid response vs. returns accept-list), but does not provide additional meaning for 'owner' and 'repo' beyond what the schema states. This meets the baseline of 3 when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Score a GitHub repository's health') and resources ('GitHub repository'), listing the exact metrics analyzed (commit cadence, issue-close latency, contributor diversity, CI green rate, release frequency). It distinguishes itself from sibling tools by focusing on repository health scoring rather than security audits, wallet scans, or other functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool: to assess repository health with specific metrics. It clearly distinguishes between paid and unpaid usage scenarios, specifying that without the '_payment' argument, it returns an accept-list for signing, and with it, it returns the health score. This directly addresses when to use the tool versus not using it based on payment requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jito_tip_calculator
Compute an expected-value-maximizing Jito tip for a Solana arbitrage bundle. Inputs: pool_depth (USD), expected_profit (USD), slot_probability [0..1]. Returns tip lamports + EV breakdown. Priced at $0.01 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.
| Name | Required | Description | Default |
|---|---|---|---|
| _payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list. | |
| slot_prob | Yes | Probability the leader slot is a Jito validator [0..1]. | |
| pool_depth | Yes | Target pool depth in USD. | |
| expected_profit | Yes | Gross expected profit of the bundle in USD. | |
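The tool's internal model is not published; a toy expected-value sketch, assuming inclusion probability grows linearly with the tip up to a pool-depth-derived cap (the 0.001 scaling constant is invented for illustration):

```python
def bundle_ev(tip_usd: float, expected_profit: float,
              slot_prob: float, pool_depth: float) -> float:
    # Toy model: P(inclusion) = slot_prob * min(1, tip / (0.001 * pool_depth));
    # the tip is only paid when the bundle actually lands
    p_incl = slot_prob * min(1.0, tip_usd / (0.001 * pool_depth))
    return p_incl * (expected_profit - tip_usd)

# Grid-search the EV-maximizing tip for a $50-profit bundle
tips = [t / 100 for t in range(1, 5000)]
best_tip = max(tips, key=lambda t: bundle_ev(t, 50.0, 0.4, 1_000_000))
print(best_tip)  # under this linear model the optimum is expected_profit / 2
```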
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly explains the tool's behavior: it computes a tip based on inputs, returns tip lamports and EV breakdown, requires payment for full response, and returns a 402 accept-list without payment. It also specifies pricing details and authorization requirements, covering mutation, authentication, and response format aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by essential details in a logical flow: inputs, returns, payment mechanism, and alternative behavior. Every sentence adds critical information without redundancy, making it highly efficient and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (involving financial calculations, payment, and Solana-specific context) and the absence of annotations and output schema, the description is complete. It covers purpose, inputs, outputs, behavioral nuances, payment requirements, and error/alternative responses, providing all necessary context for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the purpose of the parameters ('inputs: pool_depth (USD), expected_profit (USD), slot_probability [0..1]') and clarifying the '_payment' argument's role in unlocking paid responses, which enhances understanding beyond the schema's technical definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('compute') and resource ('Jito tip for a Solana arbitrage bundle') with precise scope ('expected-value-maximizing'). It distinguishes this tool from all sibling tools, which are unrelated to Solana arbitrage or tip calculation, making the purpose immediately distinct and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('to compute an expected-value-maximizing Jito tip') and provides clear context for when not to use it (e.g., for other Solana operations like wallet scans or unrelated tasks like password checks). It also details the payment requirement and alternative behavior if payment is absent, offering comprehensive usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
solana_wallet_scan
Scan a Solana wallet for portfolio value, dust accounts, stale stake accounts, and low-liquidity position warnings. Returns findings + referral CTAs. Priced at $0.01 USDC on Base (x402). Pass a signed x402 v2 authorization as the '_payment' argument to unlock the paid response. Without it, the tool returns the 402 accept-list for your wallet to sign.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Base58 Solana wallet address (32-44 chars). | |
| _payment | No | Optional. Signed x402 v2 X-PAYMENT header value (base64-encoded EIP-3009 authorization). If present, forwarded upstream; if absent, tool returns the 402 accept-list. | |
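A quick client-side sanity check of the `address` format; the regex covers the base58 alphabet and the 32-44 character length from the schema, validating shape only, not that the address exists on-chain (the example address is illustrative):

```python
import re

# Base58 alphabet excludes 0, O, I, l; Solana addresses are 32-44 chars
BASE58_ADDR = re.compile(r"[1-9A-HJ-NP-Za-km-z]{32,44}")

def looks_like_solana_address(s: str) -> bool:
    return bool(BASE58_ADDR.fullmatch(s))

print(looks_like_solana_address("7xKXtg2CW87d97TXJSDpbD5jBkheTqA83TZRuJosgAsU"))  # True
print(looks_like_solana_address("0xdeadbeef"))  # False: hex prefix, too short
```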
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does so well: it discloses payment requirements (priced at $0.01 USDC), authorization needs (signed x402 v2 authorization), and behavioral differences based on '_payment' argument (paid response vs. 402 accept-list). It doesn't mention rate limits or other constraints, but covers key operational traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with core functionality, followed by payment details and behavioral conditions. Every sentence adds value: first explains what the tool does, second covers pricing and authorization, third clarifies the payment argument's effect. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well by explaining the tool's purpose, usage, payment model, and behavioral outcomes. It could improve by detailing the exact structure of returned findings or error cases, but it's largely complete for a tool with 2 parameters and clear operational logic.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds some context by explaining the purpose of '_payment' (to unlock paid response or return 402 accept-list), but doesn't provide additional meaning beyond what the schema already documents for parameters like 'address' format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('scan', 'returns') and resources ('Solana wallet'), detailing what it analyzes (portfolio value, dust accounts, stale stake accounts, low-liquidity position warnings) and what it returns (findings + referral CTAs). It distinguishes from siblings like 'solana_wallet_security_audit_rules' by focusing on scanning rather than auditing rules.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (to scan a wallet for specific issues) and provides clear alternatives based on payment: with '_payment' argument for paid response, without it for 402 accept-list. It also mentions pricing ($0.01 USDC on Base) and authorization requirements, giving comprehensive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
solana_wallet_security_audit_rules
Return metadata for the cipher-solana-wallet-audit v1.1.0 ruleset — the free MIT GitHub Action that catches plaintext Solana private keys, seed phrases, and leaked .env files in CI. Free, no payment required. Intended for educational use and agent-driven repo hardening.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns metadata (a read operation) and mentions the ruleset is free and MIT-licensed, adding some behavioral context like cost and licensing. However, it lacks details on rate limits, error handling, or response format, which are important for a tool with no output schema. It doesn't contradict annotations (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first part states the core purpose, followed by additional context about the ruleset's function and intended use. It avoids unnecessary fluff, though the repetition of 'free' could be slightly trimmed. Every sentence adds value, such as clarifying the ruleset's purpose and usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 0 parameters, the description provides basic completeness: it explains what the tool does and some context (free, MIT, educational use). However, for a tool returning metadata, it doesn't detail the structure or content of the returned data, which could be important for an AI agent. It's adequate but has clear gaps in output expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameters need documentation. The description doesn't add parameter info, but that's fine since there are none. Baseline is 4 for 0 parameters, as it doesn't need to compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return metadata for the cipher-solana-wallet-audit v1.1.0 ruleset.' It specifies the resource (the ruleset) and the action (return metadata), though it doesn't explicitly distinguish from sibling tools like 'solana_wallet_scan' which might have overlapping domains. The additional context about the ruleset's function (catching plaintext Solana private keys, etc.) helps clarify scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: 'Intended for educational use and agent-driven repo hardening,' which suggests when to use it. However, it doesn't explicitly state when not to use it or name alternatives among siblings (e.g., vs. 'solana_wallet_scan' or 'github_repo_health'). The mention of 'free, no payment required' hints at a cost consideration but isn't a full guideline.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.