Glama

Server Details

Solana MCP for wallets, trades, markets, PnL, transfers, onchain data, signable swaps and API tools.

Status: Healthy
Transport: Streamable HTTP
Repository: vybenetwork/solana-mcp-vybe
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.8/5 across 4 of 4 tools scored. Lowest: 3.1/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: execute-request performs API calls, get-endpoint retrieves detailed endpoint info, list-endpoints provides a structured overview of all endpoints, and search-endpoints enables keyword-based discovery. There is no overlap or ambiguity in their functions, making it easy for an agent to select the right tool.

Naming Consistency: 4/5

The tools follow a consistent verb-noun pattern with hyphens (e.g., execute-request, get-endpoint), though list-endpoints and search-endpoints use plural nouns where the other two are singular. This minor deviation does not hinder readability, and the naming remains mostly uniform and predictable across the set.

Tool Count: 5/5

With 4 tools, the server is well-scoped for its purpose of API exploration and execution. Each tool earns its place by covering distinct aspects: listing, searching, getting details, and executing requests, which aligns with typical API interaction workflows without being overly sparse or bloated.

Completeness: 5/5

The tool surface provides complete coverage for API interaction: list-endpoints and search-endpoints enable discovery, get-endpoint offers detailed information, and execute-request allows execution. This covers the full lifecycle from finding to using endpoints, with no obvious gaps or dead ends in the domain.

Available Tools

4 tools
execute-request: B (Destructive)

Executes an API request with given HAR

Parameters (JSON Schema)

Name       | Required | Description        | Default
harRequest | Yes      | HAR request object |
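The schema labels harRequest only as a "HAR request object". As a hedged sketch, a minimal object following the HTTP Archive 1.2 request structure might look like the following; the URL and query values are hypothetical placeholders, not real Vybe endpoints:

```python
# Hedged sketch of a minimal HAR 1.2 "request" object, assumed to be
# the shape execute-request's harRequest parameter expects.
har_request = {
    "method": "GET",
    "url": "https://api.example.com/v1/prices",  # hypothetical endpoint
    "httpVersion": "HTTP/1.1",
    "headers": [
        {"name": "Accept", "value": "application/json"},
    ],
    "queryString": [
        {"name": "symbol", "value": "SOL"},  # hypothetical query parameter
    ],
    "cookies": [],
    "headersSize": -1,  # -1 means "not recorded" in HAR
    "bodySize": -1,
}
```

Since the tool is marked destructive and open-world, an agent should treat any non-GET method in this object as a potential state-changing call.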
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, openWorldHint=true, and destructiveHint=true, covering safety and scope. The description adds value by specifying that it executes an API request, implying it performs network calls and may have side effects, which aligns with the annotations. However, it doesn't detail rate limits, authentication needs, or error handling beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence: 'Executes an API request with given HAR'. It's concise with zero wasted words, clearly stating the core action without unnecessary elaboration, making it efficient for an agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (executing arbitrary API requests with destructive potential), lack of output schema, and rich annotations, the description is incomplete. It doesn't explain what HAR is, provide examples, detail response formats, or warn about risks like network errors or side effects, leaving significant gaps for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'harRequest' documented as a 'HAR request object'. The description adds minimal meaning by mentioning 'given HAR', but it doesn't explain HAR format or usage beyond the schema. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'Executes an API request with given HAR', which specifies the verb ('executes') and resource ('API request'), but it's vague about what 'HAR' means and doesn't differentiate from sibling tools like 'get-endpoint' or 'list-endpoints'. It provides a basic purpose but lacks specificity about the scope or domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives like 'get-endpoint' or 'list-endpoints'. There's no mention of prerequisites, such as needing a HAR-formatted request, or exclusions, leaving the agent to infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-endpoint: A (Read-only)

Gets detailed information about a specific API endpoint, including security schemes and servers

Parameters (JSON Schema)

Name   | Required | Description | Default
path   | Yes      |             |
method | Yes      |             |
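Because the schema leaves both parameters undocumented, here is a hedged sketch of what a call's arguments likely look like, assuming 'path' is an OpenAPI path string (e.g., one returned by list-endpoints) and 'method' is an HTTP verb; the path shown is a hypothetical placeholder:

```python
# Assumed argument shapes for get-endpoint; neither field is
# described in the schema, so both are inferred.
arguments = {
    "path": "/v1/example",  # hypothetical OpenAPI path
    "method": "GET",        # HTTP method defined for that path
}

# A client would pass this dict as the tool-call arguments,
# e.g. session.call_tool("get-endpoint", arguments) in an MCP client.
```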
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true and openWorldHint=false, indicating it's a safe read operation with limited scope. The description adds context about the type of information returned ('security schemes and servers'), which is useful beyond annotations. However, it doesn't disclose potential limitations like error handling or response format, leaving behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes key details without waste. It's appropriately sized for a tool with two parameters and clear annotations, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 required parameters, no output schema), the description is complete enough for basic use. It covers what the tool does and the type of information returned. However, without an output schema, it could benefit from more detail on the response structure, but annotations provide safety context, making it adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%: the schema defines parameter types but describes neither 'path' nor 'method'. The tool description doesn't add any parameter-specific details either, such as what the two parameters represent or their expected formats. The score stays at the baseline of 3 because both parameters are simple and required, but no extra semantic value is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Gets') and resource ('detailed information about a specific API endpoint'), specifying what information is retrieved ('including security schemes and servers'). It distinguishes from siblings like 'list-endpoints' by focusing on a single endpoint rather than listing multiple, though it doesn't explicitly differentiate from 'search-endpoints' or 'execute-request'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving details of a known endpoint, suggesting it's for inspection rather than execution or listing. However, it doesn't explicitly state when to use this tool versus alternatives like 'search-endpoints' for finding endpoints or 'execute-request' for making API calls, leaving some ambiguity in context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list-endpoints: A (Read-only)

Lists all API paths and their HTTP methods with summaries, organized by path. Results can be passed directly into 'get-endpoint'.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=false, indicating a safe, bounded read operation. The description adds valuable context beyond annotations by specifying the output format ('organized by path') and the direct usability with 'get-endpoint', enhancing behavioral understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and adds a practical usage note in the second. Both sentences earn their place by providing essential information without redundancy, making it efficiently structured and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and annotations covering safety and boundedness, the description is largely complete. It explains what the tool does and how to use the results. A minor gap is the lack of detail on output structure (e.g., format specifics), but this is acceptable for a list tool with good annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4 as per rules. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and output usage, which is sufficient given the lack of inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Lists all API paths and their HTTP methods with summaries') and distinguishes it from siblings by mentioning 'Results can be passed directly into 'get-endpoint''. It explicitly identifies the resource (API paths) and organizational method (organized by path), avoiding tautology with the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by stating that results can be passed to 'get-endpoint', implying this tool is for discovery and the sibling is for detailed retrieval. However, it does not explicitly state when not to use it or mention alternatives like 'search-endpoints' for filtered searches, leaving some guidance implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search-endpoints: A (Read-only)

Performs a deep search through paths, operations, and component schemas to discover relevant API endpoints. Use this tool to find specific API capabilities, required parameters, or data models based on search keywords. Results can be passed directly into 'get-endpoint'.

Parameters (JSON Schema)

Name    | Required | Description                       | Default
pattern | Yes      | Search pattern (case-insensitive) |
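To illustrate what a case-insensitive 'pattern' match over endpoint paths and summaries could behave like, here is a sketch against hypothetical sample data; the real tool searches the server's actual paths, operations, and component schemas, so the catalog below is purely illustrative:

```python
import re

# Hypothetical sample catalog standing in for the server's endpoint list.
endpoints = {
    "/v1/balances": "Get wallet token balances",
    "/v1/trades": "List recent trades for a market",
}

def search_endpoints(pattern):
    """Return paths whose path string or summary matches the pattern,
    case-insensitively, mirroring the documented parameter behavior."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [path for path, summary in endpoints.items()
            if rx.search(path) or rx.search(summary)]

print(search_endpoints("TRADE"))  # ['/v1/trades']
```

Matching paths could then be fed into get-endpoint, as the description suggests.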
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and openWorldHint=false, covering safety and scope. The description adds context about the search being 'deep' and covering 'paths, operations, and component schemas,' which clarifies behavior beyond annotations. However, it lacks details on output format, pagination, or error handling, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and workflow integration in two concise sentences. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search function with 1 parameter), annotations cover safety and scope, and schema fully documents the input. The description adds purpose, usage, and sibling integration, but lacks output schema details (e.g., result format), which is a minor gap. Overall, it's mostly complete for the context provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'pattern' documented as 'Search pattern (case-insensitive).' The description adds minimal value by mentioning 'search keywords,' but doesn't elaborate on syntax, examples, or constraints beyond what the schema provides. Baseline 3 is appropriate as the schema handles most documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Performs a deep search') and target resources ('paths, operations, and component schemas') to 'discover relevant API endpoints.' It distinguishes from siblings like 'list-endpoints' (likely a simple listing) and 'get-endpoint' (retrieves a specific endpoint), making the purpose explicit and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use this tool ('to find specific API capabilities, required parameters, or data models based on search keywords') and mentions an alternative ('Results can be passed directly into 'get-endpoint''), which helps the agent understand the workflow and when to choose this over other tools like 'list-endpoints' or 'execute-request.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
