Solana MCP by Vybe
Server Details
Solana MCP for wallets, trades, markets, PnL, transfers, onchain data, signable swaps and API tools.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: vybenetwork/solana-mcp-vybe
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 4 of 4 tools scored. Lowest: 3.1/5.
Each tool has a clearly distinct purpose: execute-request performs API calls, get-endpoint retrieves detailed endpoint info, list-endpoints provides a structured overview of all endpoints, and search-endpoints enables keyword-based discovery. There is no overlap or ambiguity in their functions, making it easy for an agent to select the right tool.
The tools follow a consistent verb-noun pattern with hyphens (e.g., execute-request, get-endpoint); list-endpoints and search-endpoints pluralize the noun. This minor variation does not hinder readability, and the naming remains uniform and predictable across the set.
With 4 tools, the server is well-scoped for its purpose of API exploration and execution. Each tool earns its place by covering distinct aspects: listing, searching, getting details, and executing requests, which aligns with typical API interaction workflows without being overly sparse or bloated.
The tool surface provides complete coverage for API interaction: list-endpoints and search-endpoints enable discovery, get-endpoint offers detailed information, and execute-request allows execution. This covers the full lifecycle from finding to using endpoints, with no obvious gaps or dead ends in the domain.
Available Tools
4 tools

execute-request (B, Destructive)
Executes an API request with given HAR
| Name | Required | Description | Default |
|---|---|---|---|
| harRequest | Yes | HAR request object | |
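The reviews below note that the description never explains what a HAR request object is. As a hedged sketch, assuming execute-request accepts a bare HAR 1.2-style "request" record rather than a full HAR log (the URL and header values are placeholders), the harRequest argument might look like:

```python
import json

# Minimal HAR 1.2-style "request" record; the exact accepted shape is an
# assumption, and the URL below is an illustrative placeholder.
har_request = {
    "method": "GET",
    "url": "https://api.example.com/v1/resource",
    "httpVersion": "HTTP/1.1",
    "headers": [{"name": "Accept", "value": "application/json"}],
    "queryString": [],
    "headersSize": -1,
    "bodySize": 0,
}

# Arguments payload as it would be passed to the tool
arguments = json.dumps({"harRequest": har_request}, indent=2)
print(arguments)
```

Because the tool is annotated as destructive and open-world, an agent should treat any non-GET method in this object as a potentially state-changing network call.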
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, openWorldHint=true, and destructiveHint=true, covering safety and scope. The description adds value by specifying that it executes an API request, implying it performs network calls and may have side effects, which aligns with the annotations. However, it doesn't detail rate limits, authentication needs, or error handling beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence: 'Executes an API request with given HAR'. It's concise with zero wasted words, clearly stating the core action without unnecessary elaboration, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (executing arbitrary API requests with destructive potential), lack of output schema, and rich annotations, the description is incomplete. It doesn't explain what HAR is, provide examples, detail response formats, or warn about risks like network errors or side effects, leaving significant gaps for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'harRequest' documented as a 'HAR request object'. The description adds minimal meaning by mentioning 'given HAR', but it doesn't explain HAR format or usage beyond the schema. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Executes an API request with given HAR', which specifies the verb ('executes') and resource ('API request'), but it's vague about what 'HAR' means and doesn't differentiate from sibling tools like 'get-endpoint' or 'list-endpoints'. It provides a basic purpose but lacks specificity about the scope or domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives like 'get-endpoint' or 'list-endpoints'. There's no mention of prerequisites, such as needing a HAR-formatted request, or exclusions, leaving the agent to infer usage from the tool name and parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get-endpoint (A, Read-only)
Gets detailed information about a specific API endpoint, including security schemes and servers
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | | |
| method | Yes | | |
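Since the schema gives no descriptions for path or method, here is a hedged sketch of a tools/call invocation; the tool name comes from this listing, while the path and method values are purely illustrative placeholders:

```python
import json

# Hypothetical MCP tools/call message for get-endpoint; "/example/endpoint"
# is a placeholder, not a real endpoint from this server's API.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get-endpoint",
        "arguments": {"path": "/example/endpoint", "method": "GET"},
    },
}
print(json.dumps(call))
```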
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true and openWorldHint=false, indicating it's a safe read operation with limited scope. The description adds context about the type of information returned ('security schemes and servers'), which is useful beyond annotations. However, it doesn't disclose potential limitations like error handling or response format, leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes key details without waste. It's appropriately sized for a tool with two parameters and clear annotations, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 required parameters, no output schema), the description is complete enough for basic use. It covers what the tool does and the type of information returned. However, without an output schema, it could benefit from more detail on the response structure, but annotations provide safety context, making it adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%: the schema defines parameter types only, with no descriptions. The tool description adds no parameter-specific detail either, such as explaining what 'path' and 'method' represent or their expected formats, so the agent is left to infer parameter intent from the names alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Gets') and resource ('detailed information about a specific API endpoint'), specifying what information is retrieved ('including security schemes and servers'). It distinguishes from siblings like 'list-endpoints' by focusing on a single endpoint rather than listing multiple, though it doesn't explicitly differentiate from 'search-endpoints' or 'execute-request'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving details of a known endpoint, suggesting it's for inspection rather than execution or listing. However, it doesn't explicitly state when to use this tool versus alternatives like 'search-endpoints' for finding endpoints or 'execute-request' for making API calls, leaving some ambiguity in context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list-endpoints (A, Read-only)
Lists all API paths and their HTTP methods with summaries, organized by path. Results can be passed directly into 'get-endpoint'.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
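The description says results can be passed directly into 'get-endpoint'. A sketch of that workflow, assuming a hypothetical call_tool(name, arguments) helper and an assumed result shape of path/method records (neither is documented by the server):

```python
# Hypothetical helper: call_tool(name, arguments) forwards an MCP tool call
# and returns the parsed result. The listing shape assumed here (a list of
# {"path": ..., "method": ...} records) is an illustration only.
def explore_first_endpoint(call_tool):
    listing = call_tool("list-endpoints", {})  # takes no parameters
    first = listing[0]                         # pick any discovered entry
    return call_tool("get-endpoint", {         # feed it straight in
        "path": first["path"],
        "method": first["method"],
    })
```

The point is the chaining: list-endpoints output is shaped so that no transformation is needed before calling get-endpoint.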
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=false, indicating a safe, bounded read operation. The description adds valuable context beyond annotations by specifying the output format ('organized by path') and the direct usability with 'get-endpoint', enhancing behavioral understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and adds a practical usage note in the second. Both sentences earn their place by providing essential information without redundancy, making it efficiently structured and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema) and annotations covering safety and boundedness, the description is largely complete. It explains what the tool does and how to use the results. A minor gap is the lack of detail on output structure (e.g., format specifics), but this is acceptable for a list tool with good annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With no parameters to document, schema description coverage is trivially complete and the baseline is 4 per the scoring rules. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and output usage, which is sufficient given the lack of inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Lists all API paths and their HTTP methods with summaries') and distinguishes it from siblings by mentioning 'Results can be passed directly into 'get-endpoint''. It explicitly identifies the resource (API paths) and organizational method (organized by path), avoiding tautology with the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage by stating that results can be passed to 'get-endpoint', implying this tool is for discovery and the sibling is for detailed retrieval. However, it does not explicitly state when not to use it or mention alternatives like 'search-endpoints' for filtered searches, leaving some guidance implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search-endpoints (A, Read-only)
Performs a deep search through paths, operations, and component schemas to discover relevant API endpoints. Use this tool to find specific API capabilities, required parameters, or data models based on search keywords. Results can be passed directly into 'get-endpoint'.
| Name | Required | Description | Default |
|---|---|---|---|
| pattern | Yes | Search pattern (case-insensitive) | |
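As a hedged illustration of how a case-insensitive pattern might match against paths, operations, and summaries (the sample endpoints and matching logic are assumptions, not the server's actual implementation):

```python
import re

# Illustrative matching only: the endpoint paths and summaries below are
# hypothetical, and the server's real search logic may differ.
endpoints = [
    ("/account/token-balance", "Get token balances for a wallet"),
    ("/price/ohlcv", "OHLCV market data"),
]
pattern = re.compile("balance", re.IGNORECASE)
matches = [path for path, summary in endpoints
           if pattern.search(path) or pattern.search(summary)]
print(matches)  # -> ['/account/token-balance']
```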
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and openWorldHint=false, covering safety and scope. The description adds context about the search being 'deep' and covering 'paths, operations, and component schemas,' which clarifies behavior beyond annotations. However, it lacks details on output format, pagination, or error handling, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and workflow integration in two concise sentences. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search function with 1 parameter), annotations cover safety and scope, and schema fully documents the input. The description adds purpose, usage, and sibling integration, but lacks output schema details (e.g., result format), which is a minor gap. Overall, it's mostly complete for the context provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'pattern' documented as 'Search pattern (case-insensitive).' The description adds minimal value by mentioning 'search keywords,' but doesn't elaborate on syntax, examples, or constraints beyond what the schema provides. Baseline 3 is appropriate as the schema handles most documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Performs a deep search') and target resources ('paths, operations, and component schemas') to 'discover relevant API endpoints.' It distinguishes from siblings like 'list-endpoints' (likely a simple listing) and 'get-endpoint' (retrieves a specific endpoint), making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use this tool ('to find specific API capabilities, required parameters, or data models based on search keywords') and mentions an alternative ('Results can be passed directly into 'get-endpoint''), which helps the agent understand the workflow and when to choose this over other tools like 'list-endpoints' or 'execute-request.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
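Before publishing, the payload can be sanity-checked locally; this sketch validates the structure shown above (the email is the placeholder from the example, to be replaced with your own):

```python
import json

# Local sanity check of the glama.json payload before serving it at
# /.well-known/glama.json on your domain.
raw = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""
doc = json.loads(raw)
assert doc["$schema"].startswith("https://glama.ai/")
assert doc["maintainers"] and all("email" in m for m in doc["maintainers"])
print("glama.json structure looks valid")
```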
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!