spraay-solana-gateway
Server Details
Batch send SOL or any SPL token to 1000+ wallets via x402. AI agent payments on Solana.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: plagtech/spraay-solana-gateway
- GitHub Stars: 0
- Server Listing: spraay-solana-gateway
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

spraay_solana_batch_send_sol (Quality: A)
Batch send SOL to multiple Solana wallets in one call. Supports up to 1000 recipients. Auto-chunks large batches. Costs $0.01 USDC via x402 protocol.
| Name | Required | Description | Default |
|---|---|---|---|
| recipients | Yes | Array of recipients | |
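To make the invocation shape concrete, here is a minimal sketch of an MCP `tools/call` payload for this tool, plus the kind of chunking the description says the server performs automatically. The per-recipient `address`/`amount` field names, the placeholder addresses, and the chunk size are illustrative assumptions, not the server's documented interface:

```python
# Hypothetical tools/call payload for spraay_solana_batch_send_sol.
# Field names and chunk size are assumptions; the server auto-chunks,
# so client-side chunking is shown only to illustrate the behavior.

def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

recipients = [
    {"address": "RecipientBase58Address1", "amount": 0.01},  # placeholder
    {"address": "RecipientBase58Address2", "amount": 0.02},  # placeholder
]

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "spraay_solana_batch_send_sol",
        "arguments": {"recipients": recipients},
    },
}

# e.g. 1000 recipients in chunks of 250 would yield 4 transactions
```

The single `recipients` argument mirrors the schema above; the $0.01 USDC x402 fee applies per call, not per chunk, as far as the description indicates.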
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses critical behavioral traits: auto-chunking mechanism for large batches and the $0.01 USDC cost via x402 protocol. This provides essential operational context beyond the basic schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste: (1) core function, (2) scale limits, (3) internal optimization (auto-chunking), (4) cost structure. Every sentence provides distinct operational value essential for invocation decisions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a batch transaction tool without output schema: covers scale limits, cost model, and internal batching behavior. Missing only explicit return value description or error handling specifics, but covers all critical invocation prerequisites.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (address and amount fully documented). Description adds value by confirming the 'up to 1000 recipients' constraint from maxItems, but does not extend semantics beyond what the well-documented schema already provides. Baseline 3 is appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Batch send SOL to multiple Solana wallets' clearly identifies the verb (batch send), resource (SOL), and scope. Explicitly distinguishes from sibling spraay_solana_batch_send_token by specifying SOL rather than generic tokens.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context by specifying SOL and noting the 1000 recipient limit and auto-chunking behavior. While it doesn't explicitly name the token alternative sibling, the SOL specificity combined with the tool name provides clear selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
spraay_solana_batch_send_token (Quality: A)
Batch send any SPL token (USDC, BONK, WIF, JUP, etc.) to multiple Solana wallets. Auto-creates token accounts. Costs $0.01 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| mint | Yes | SPL token mint address (base58) | |
| recipients | Yes | Array of recipients | |
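Since each call costs $0.01 USDC via x402, a cheap client-side format check on the mint address can avoid paying for an obviously malformed input. A sketch assuming only that mints are base58 strings of typical Solana public-key length; the regex, length bounds, and per-recipient field names are my assumptions, not the server's validation rules:

```python
import re

# Base58 alphabet (no 0, O, I, or l); 32-44 chars covers Solana pubkeys.
BASE58_RE = re.compile(r"^[1-9A-HJ-NP-Za-km-z]{32,44}$")

def looks_like_mint(mint: str) -> bool:
    """Format-only check; does not prove the mint exists on-chain."""
    return bool(BASE58_RE.fullmatch(mint))

USDC_MINT = "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v"  # mainnet USDC

arguments = {
    "mint": USDC_MINT,
    "recipients": [  # field names assumed, mirroring the SOL tool
        {"address": "RecipientBase58Address", "amount": 5.0},  # placeholder
    ],
}
```

This only filters out structurally invalid input; the server (and the chain) remain the source of truth for whether a mint is real.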
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure given zero annotations: 'Auto-creates token accounts' reveals critical side effects (account creation), and 'Costs $0.01 USDC via x402' explains pricing/payment mechanism. However, lacks details on idempotency, partial failure behavior in batch, or transaction confirmation timing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tight sentences with zero waste: core function first, side effect second, cost third. Every clause earns its place. Appropriate density for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a batch transaction tool but gaps remain: no output schema provided and description doesn't explain return values (transaction signature? success/failure per recipient?). Cost and auto-creation are covered, but error handling for partial batch failures is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline applies. Description adds value via concrete token examples (USDC, BONK, etc.) clarifying acceptable mint inputs, but doesn't elaborate on recipients array limits (max 1000) or amount precision requirements beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: specific verb 'Batch send' + resource 'SPL token' + scope 'multiple Solana wallets'. Examples (USDC, BONK, WIF, JUP) concretize the token type, implicitly distinguishing from sibling 'batch_send_sol' which handles native SOL instead of SPL tokens.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied differentiation from siblings by specifying SPL tokens (vs SOL), but lacks explicit 'when to use' guidance, prerequisites, or error handling instructions. User must infer this is for fungible tokens versus the SOL-specific alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
spraay_solana_quote (Quality: A)
Get a cost estimate for a Spraay batch send on Solana. Returns fees, tx count, and timing. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| token | No | Token symbol: SOL, USDC, BONK, etc. | |
| recipients | Yes | Number of recipients | |
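Note the parameter-shape difference the table implies: here `recipients` is a count, whereas the send tools take an array of recipient objects. A hypothetical builder for the quote call, with the payload shape assumed from the generic MCP `tools/call` convention rather than taken from this server's documentation:

```python
def quote_request(token: str, n_recipients: int, request_id: int = 1) -> dict:
    """Build a tools/call payload for spraay_solana_quote.

    `recipients` is a count here, not an address list -- an easy
    mix-up with the batch-send tools, which expect an array.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "spraay_solana_quote",
            "arguments": {"token": token, "recipients": n_recipients},
        },
    }
```

A typical flow would be to quote first (at $0.001 USDC) before committing to a $0.01 batch send, though the listing does not mandate that ordering.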
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return values ('fees, tx count, and timing') and invocation cost ('Costs $0.001 USDC via x402'), which is critical behavioral context. Could improve by explicitly stating this is a read-only simulation with no blockchain side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: purpose (sentence 1), return values (sentence 2), invocation cost (sentence 3). Every sentence delivers unique value. Front-loaded with clear action statement. Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage for a quote tool: explains what it estimates (fees, tx count, timing), includes domain-specific cost disclosure (x402), and references the Solana/Spraay ecosystem. Missing only explicit confirmation that this is read-only/safe to call without side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear descriptions for both 'token' (including examples) and 'recipients'. Description adds no explicit parameter guidance, but schema is self-documenting. Baseline 3 appropriate when structured schema carries full semantic load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get a cost estimate for a Spraay batch send on Solana' - specific verb (get) + resource (cost estimate) + domain context (Spraay on Solana). Distinguishes from sibling send tools by focusing on estimation rather than execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'quote' terminology and cost estimation purpose, but lacks explicit guidance such as 'use this before batch_send' or 'for pricing only, does not execute transactions.' Sibling differentiation relies on parsing tool names rather than description guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
spraay_solana_status (Quality: B)
Check the status of a Solana transaction by its signature. Costs $0.001 USDC via x402.
| Name | Required | Description | Default |
|---|---|---|---|
| txid | Yes | Solana transaction signature | |
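Because each status lookup costs $0.001 USDC, a capped polling loop is a reasonable client-side pattern. A sketch with the actual MCP call abstracted behind a callable; the terminal status strings ('confirmed', 'finalized', 'failed') are assumptions, since the tool does not document its return values:

```python
import time

TERMINAL = {"confirmed", "finalized", "failed"}  # assumed status values

def wait_for_confirmation(txid, fetch_status, attempts=5, delay=2.0):
    """Poll a status-fetching callable until a terminal state or give up.

    fetch_status(txid) is expected to wrap the spraay_solana_status MCP
    call and return a status string. Attempts are capped because every
    poll costs $0.001 USDC via x402.
    """
    for _ in range(attempts):
        status = fetch_status(txid)
        if status in TERMINAL:
            return status
        time.sleep(delay)
    return "timeout"
```

With a stub such as `wait_for_confirmation("sig", lambda _: "confirmed")` the loop returns on the first poll without sleeping, which also makes the pattern easy to unit-test.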
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses critical cost behavior ($0.001 USDC via x402), which is essential for agent decision-making. However, omits other traits: doesn't confirm read-only/idempotent nature, error conditions, or retry behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with purpose, followed by cost implication. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple single-parameter tool, but lacks description of return values or status states (confirmed/pending/failed) given no output schema exists. Cost disclosure compensates partially for missing annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (txid parameter fully described). Description mentions 'by its signature' which aligns semantically with 'txid', but adds no format details or constraints beyond the schema. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Check' + resource 'Solana transaction status' + identifier 'signature'. Distinct from batch_send and quote siblings by function (lookup vs. submit/price).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or workflow context (e.g., 'use after submitting transaction'). Cost disclosure implies it's a paid lookup, but doesn't distinguish when to use vs. alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.