beefy-mcp
Server Details
Beefy - 10 tools for protocol data
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: junct-bot/beefy-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
10 tools

get_api_v1_investor_investor_address_timeline (Grade: C)
GET /api/v1/investor/{investor_address}/timeline
| Name | Required | Description | Default |
|---|---|---|---|
| investor_address | Yes | path parameter: investor_address (string) | |
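Since the listing's URL field is empty, a call to this endpoint can only be sketched. A minimal example of filling the single path parameter, where the base URL and address value are placeholders rather than published details:

```python
# BASE is a placeholder: the listing does not publish the server's URL.
BASE = "https://api.example.com"

def timeline_url(investor_address: str) -> str:
    # Fill the {investor_address} path parameter of the timeline endpoint.
    return f"{BASE}/api/v1/investor/{investor_address}/timeline"

print(timeline_url("0x1234abcd"))
```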
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosure. It indicates a read operation via 'GET' but fails to describe what data the timeline contains, expected response format, pagination behavior, or rate limiting. The behavioral disclosure is minimal but not contradictory.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief, the description is inappropriately short given the lack of annotations and output schema. It wastes its single sentence on restating the URL path rather than front-loading critical information about the timeline's content or purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having only one parameter, the tool lacks an output schema and annotations. The description does not compensate by explaining what the timeline returns, leaving the agent uninformed about the tool's utility. For a data retrieval tool, this is a significant gap in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for its single parameter (investor_address). The description adds no additional semantic context about what constitutes a valid investor address (e.g., blockchain address format, checksum requirements), so it meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description merely restates the API endpoint path ('GET /api/v1/investor/{investor_address}/timeline'), which is already encoded in the tool name. While it implies fetching a timeline resource, it does not explain what 'timeline' means in this context (transaction history? investment milestones?) nor distinguish its purpose from sibling tools like vault or price endpoints.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There are no stated prerequisites, conditions for use, or indicators of when an agent should prefer this over querying vault-specific investor endpoints or other history-related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_status (Grade: B)
[DISCOVERY] GET /api/v1/status Returns: { arbitrum: { subgraph: string, tag: string, blockNumber: string, timestamp: string, hasErrors: boolean }[], avax: { subgraph: string, tag: string, blockNumber: string, timestamp: string, hasErrors: boolean }[], base: { subgraph: string, tag: string, blockNumber: string, timestamp: string, hasErrors: boolean }[], ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
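The description's inline return structure can be put to work directly. A sketch of scanning a payload of the documented shape for chains whose subgraphs report errors; the sample data below is illustrative, not a real response:

```python
# Illustrative payload matching the documented per-chain return shape.
status = {
    "arbitrum": [{"subgraph": "beefy", "tag": "v1", "blockNumber": "100",
                  "timestamp": "1700000000", "hasErrors": False}],
    "base": [{"subgraph": "beefy", "tag": "v1", "blockNumber": "90",
              "timestamp": "1700000000", "hasErrors": True}],
}

def chains_with_errors(payload: dict) -> list[str]:
    # A chain is flagged if any of its subgraph entries reports hasErrors.
    return [chain for chain, subs in payload.items()
            if any(sub["hasErrors"] for sub in subs)]

print(chains_with_errors(status))  # ['base']
```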
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden of behavioral disclosure. It compensates partially by detailing the return structure (subgraph, tag, blockNumber, hasErrors for each chain), but fails to disclose safety characteristics (read-only nature implied by GET but not stated), rate limits, authentication requirements, or error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is information-dense and front-loaded with the HTTP method and endpoint path. However, the inline JSON return structure is dense and difficult to parse visually. While every element serves a purpose (especially compensating for the missing output schema), better formatting or separation of concerns would improve clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (zero parameters) and absence of a structured output schema, the description provides adequate completeness by embedding the full return structure inline. For a simple status-checking tool, this level of documentation is sufficient to understand both inputs and outputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description does not need to compensate for missing parameter documentation, though it appropriately reflects the parameter-less nature of this status endpoint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a discovery endpoint ('[DISCOVERY] GET /api/v1/status') that retrieves status information across multiple blockchain networks (arbitrum, avax, base, etc.). It effectively distinguishes itself from siblings which target specific vault addresses or investor timelines by indicating it returns global subgraph status data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it identifies itself as a '[DISCOVERY]' endpoint, it does not explain discovery use cases, prerequisites, or when to prefer this over the vault-specific or investor-specific sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vault_chain_vault_address_harvests (Grade: D)
GET /api/v1/vault/{chain}/{vault_address}/harvests
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | The chain the vault is on | |
| vault_address | Yes | The vault contract address | |
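A minimal sketch of constructing the request from the two required path parameters, assuming a placeholder base URL (the listing leaves the server URL blank) and a made-up vault address:

```python
BASE = "https://api.example.com"  # placeholder; not published in the listing

def harvests_url(chain: str, vault_address: str) -> str:
    # Both path parameters are required per the table above.
    return f"{BASE}/api/v1/vault/{chain}/{vault_address}/harvests"

print(harvests_url("base", "0xdeadbeef"))
```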
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description discloses nothing about return format, pagination behavior, data freshness, or side effects. While 'GET' implies read-only, the description fails to explain what data structure is returned or what constitutes a 'harvest' event, leaving the agent without behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief (one line), but this represents under-specification rather than efficient conciseness. The single provided line restates information already present in the tool name without adding semantic value, failing the 'every sentence earns its place' standard.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite being a simple 2-parameter endpoint, the tool involves domain-specific concepts (harvests, vaults, chains) that require explanation. With no output schema, no annotations, and no description of return values or data structure, the definition leaves critical gaps for an agent trying to understand what data this tool retrieves.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both 'chain' and 'vault_address' having adequate descriptions in the schema itself. The tool description adds no additional parameter context (format examples, valid chains, address checksum requirements), so it meets the baseline expectation when the schema carries the descriptive burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description merely transcribes the HTTP path (GET /api/v1/vault/{chain}/{vault_address}/harvests), which closely mirrors the tool name. It fails to explain what 'harvests' means in this DeFi/vault context (likely yield harvesting events) and does not distinguish from siblings like get_api_v1_vaults_chain_harvests_since (bulk vs. single vault queries).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. Given siblings like get_api_v1_vaults_chain_harvests_since (plural vaults) and get_api_v1_vault_chain_vault_address_investors, the description should specify this retrieves historical harvest events for a specific vault address, but it provides no comparative context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vault_chain_vault_address_investors (Grade: B)
Get all investor positions for a vault Returns: { investor_address: string, total_shares_balance: string, underlying_balance0: string, ... }.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | The chain the vault is on | |
| vault_address | Yes | The vault contract address | |
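The documented return fields arrive as strings, which suggests decimal handling on the client side. A sketch under that assumption, with made-up rows and the array shape assumed (the listing does not say whether the response is an array or a single object):

```python
from decimal import Decimal

# Illustrative rows using the documented field names; values are made up.
positions = [
    {"investor_address": "0xaaa", "total_shares_balance": "1.5",
     "underlying_balance0": "10"},
    {"investor_address": "0xbbb", "total_shares_balance": "2.25",
     "underlying_balance0": "7"},
]

# Balances are strings, so Decimal avoids float rounding drift.
total_shares = sum(Decimal(p["total_shares_balance"]) for p in positions)
print(total_shares)  # 3.75
```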
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It partially compensates by documenting the return structure (field names like investor_address, total_shares_balance), which is helpful given the missing output schema. However, it lacks safety disclosures (idempotency, read-only nature), error behaviors, or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but awkwardly structured, concatenating the purpose and return value documentation into one sentence: '...for a vault Returns: {...}'. The return specification feels appended rather than integrated, though the overall length is appropriate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter tool, the description is minimally adequate. The inline return documentation partially compensates for the lack of a formal output schema, though it omits whether the result is an array or single object, and lacks authentication or rate limit guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('The chain the vault is on', 'The vault contract address'), the baseline is established. The description adds no additional parameter semantics, syntax examples, or format constraints beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'all investor positions for a vault' using the specific verb 'Get'. It implicitly distinguishes from siblings like 'harvests' or 'prices' by specifying 'investor positions' as the resource, though it doesn't explicitly contrast with the investor-specific timeline endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, particularly the sibling 'get_api_v1_investor_investor_address_timeline'. It doesn't mention prerequisites (e.g., obtaining the vault_address first) or specific use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vault_chain_vault_address_move_ticks (Grade: B)
Get all move ticks for a vault — Get all move ticks for a vault, excluding deposits and withdrawals
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | The chain the vault is on | |
| end_time | No | End timestamp (exclusive) | |
| start_time | No | Start timestamp (inclusive) | |
| vault_address | Yes | The vault contract address | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden of behavioral disclosure. It adds valuable filtering context (excludes deposits/withdrawals) but omits critical safety information—such as whether this is read-only, if it requires authentication, or pagination behavior—that agents need when annotations are absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The text repeats 'Get all move ticks for a vault' twice (separated by an em-dash), creating redundancy without adding information. The second clause containing the exclusion criteria is valuable, but the repetition wastes tokens and indicates poor structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists and no annotations are provided, the description should explain return values or response structure. It fails to define the domain-specific term 'move ticks' or describe what data is returned, leaving agents uncertain about the tool's output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all four parameters including inclusive/exclusive timestamp semantics. The description adds no parameter-specific details, but baseline 3 is appropriate since the schema fully documents inputs without needing supplemental description text.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the resource (move ticks) and scope (all for a specific vault). It effectively distinguishes from siblings like 'investors' or 'harvests' by explicitly excluding deposits and withdrawals, clarifying this retrieves internal vault movement events only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance by stating what is excluded (deposits/withdrawals), hinting that other tools handle those transaction types. However, it fails to explicitly name sibling alternatives or state prerequisites like requiring the vault_address to be valid.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vault_chain_vault_address_price (Grade: C)
GET /api/v1/vault/{chain}/{vault_address}/price
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | path parameter: chain ("arbitrum" | "avax" | "base" | "berachain" | "bsc" | "gnosis" | "hyperevm" | "linea" | "lisk" | "manta" | "mantle" | "megaeth" | "mode" | "monad" | "moonbeam" | "optimism" | "plasma" | "polygon" | "rootstock" | "saga" | "scroll" | "sei" | "sonic" | "zksync") | |
| vault_address | Yes | path parameter: vault_address (string) | |
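The chain enum in the schema is the one concrete constraint this tool publishes, so it can be validated client-side. A sketch with a placeholder base URL (the listing's URL field is empty):

```python
# The 24 chain identifiers below are copied from the parameter table.
CHAINS = {
    "arbitrum", "avax", "base", "berachain", "bsc", "gnosis", "hyperevm",
    "linea", "lisk", "manta", "mantle", "megaeth", "mode", "monad",
    "moonbeam", "optimism", "plasma", "polygon", "rootstock", "saga",
    "scroll", "sei", "sonic", "zksync",
}

BASE = "https://api.example.com"  # placeholder base URL

def price_url(chain: str, vault_address: str) -> str:
    # Reject chains outside the schema's enum before issuing a request.
    if chain not in CHAINS:
        raise ValueError(f"unsupported chain: {chain!r}")
    return f"{BASE}/api/v1/vault/{chain}/{vault_address}/price"
```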
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It only indicates the HTTP method (GET) through the path string, implying a read-only operation, but lacks information on authentication requirements, rate limiting, caching behavior, or the structure of the returned price data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (single line), but fails to front-load any meaningful semantic information. While not verbose, the brevity reflects under-specification rather than efficient communication, as every word merely echoes the endpoint path.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with only two parameters and no output schema, the description is still inadequate. It fails to explain the business context of the 'price' resource, the expected return format, or how it differs from related endpoints, leaving significant gaps in the agent's understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (both parameters have descriptions, including the enum values for 'chain'), the description does not need to compensate for missing schema details. It adds no additional parameter semantics beyond the schema, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description merely restates the HTTP endpoint path (GET /api/v1/vault/{chain}/{vault_address}/price), which is essentially a tautology of the function name. While it implies retrieving a 'price', it fails to specify what this price represents (e.g., vault share price, underlying token price) or distinguish itself from sibling historical price tools like 'prices_period_since'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the sibling historical price endpoints (prices_range_period, prices_period_since), nor any mention of prerequisites such as valid vault addresses or supported chains beyond the raw schema enum.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vault_chain_vault_address_prices_period_since (Grade: D)
GET /api/v1/vault/{chain}/{vault_address}/prices/{period}/{since}
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | The chain the vault is on | |
| since | Yes | The unix timestamp to start from | |
| period | Yes | The snapshot period for prices | |
| vault_address | Yes | The vault contract address | |
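A sketch of assembling the four required path segments. The base URL is a placeholder, and since valid `period` values are undocumented in the listing, the "1h" value below is a guess:

```python
BASE = "https://api.example.com"  # placeholder base URL

def prices_since_url(chain: str, vault_address: str,
                     period: str, since: int) -> str:
    # 'since' is a unix timestamp per the table; valid 'period' strings
    # are not documented, so any value used here is an assumption.
    return f"{BASE}/api/v1/vault/{chain}/{vault_address}/prices/{period}/{since}"

print(prices_since_url("arbitrum", "0xabc", "1h", 1700000000))
```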
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not state what data is returned (historical price snapshots), response format, pagination behavior, rate limits, or error conditions (e.g., invalid vault addresses).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief (single line), the description wastes its space repeating the endpoint path rather than front-loading value. It is technically concise but fails to earn its place by providing actionable information about the tool's functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 required parameters, no output schema, no annotations, and the complexity of time-series price data, the description is grossly incomplete. It omits the nature of the returned data, how 'period' and 'since' interact to define the time window, and whether the data is aggregated or raw.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all 4 parameters have basic descriptions like 'The unix timestamp to start from'), so the baseline is 3. The description adds no additional context about valid formats for 'period' (e.g., '1h' vs 'daily') or value constraints beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description merely restates the HTTP endpoint path ('GET /api/v1/vault/{chain}/{vault_address}/prices/{period}/{since}'), which is essentially a tautology of the tool name. It fails to explain that the tool retrieves historical price data for a DeFi vault or what the time-series data represents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus siblings like 'get_api_v1_vault_chain_vault_address_prices_range_period' or the singular 'price' endpoint. No mention of prerequisites, data availability constraints, or recommended usage patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vault_chain_vault_address_prices_range_period (Grade: C)
GET /api/v1/vault/{chain}/{vault_address}/prices/range/{period} Returns: { min: number, max: number }.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | The chain the vault is on | |
| period | Yes | The snapshot period for prices | |
| vault_address | Yes | The vault contract address | |
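The description does disclose the return shape, which makes a client-side sketch possible. Sample values below are illustrative, not real prices:

```python
# Illustrative response matching the documented "{ min: number, max: number }".
resp = {"min": 0.98, "max": 1.07}

# The price spread over the requested period.
spread = resp["max"] - resp["min"]
print(round(spread, 4))  # 0.09
```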
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the return structure ({min, max}) which compensates for the missing output schema, but omits read-only nature, rate limits, error conditions, and valid parameter values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with zero redundancy. However, the brevity borders on under-specification; the front-loaded URL path consumes valuable descriptive space that could explain business logic.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite mentioning the return structure, the description lacks crucial domain context for a financial data tool: valid period enum values, timestamp formats, and how min/max are calculated (e.g., over the entire period or snapshots).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with basic descriptions for all three parameters. The description adds no additional semantic context beyond the schema (e.g., no examples of valid 'period' strings or chain identifiers), warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it retrieves price range data (min/max) for a vault period, but merely echoes the URL path structure. It fails to distinguish from siblings like 'get_api_v1_vault_chain_vault_address_price' (singular) or clarify what 'range' signifies in business terms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus the singular 'price' endpoint or 'prices_period_since'. No mention of valid period formats (e.g., '1d', '7d') or prerequisites like vault existence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vaults_chain_harvests_since (Grade: C)
GET /api/v1/vaults/{chain}/harvests/{since}
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | The chain to return vaults harvest data for | |
| since | Yes | The unix timestamp to return harvests since | |
| vaults | No | The vault addresses to return harvests for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but discloses nothing beyond the HTTP method implying read-only access. It omits what constitutes a harvest event, return data structure, pagination behavior, or whether the since parameter is inclusive/exclusive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief (a single sentence), it is inappropriately concise: the sentence fails to earn its place, providing no semantic value beyond the tool name itself, and it is not structured to help an agent understand the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without annotations or output schema, the description should explain the domain concepts (vaults, harvests) and return format. For a 3-parameter DeFi analytics tool, the description is inadequate - it doesn't explain what data is returned or how this bulk endpoint differs from single-vault alternatives.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (chain, since, and vaults all have descriptions), establishing a baseline of 3. The description adds no additional parameter context such as expected chain identifier formats, timestamp units, or vault address formatting requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'GET /api/v1/vaults/{chain}/harvests/{since}' is tautological: it restates the HTTP path encoded in the tool name without explaining what 'harvests' are (DeFi yield events) or what data is returned. It fails to distinguish this batch/multi-vault tool from the sibling get_api_v1_vault_chain_vault_address_harvests, which fetches harvests for a single vault.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the sibling get_api_v1_vault_chain_vault_address_harvests endpoint, nor when to provide the optional 'vaults' parameter versus omitting it. No mention of prerequisites like valid chain identifiers or timestamp formats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_v1_vaults_chain_periodCInspect
GET /api/v1/vaults/{chain}/{period}
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | The chain the vault is on | |
| period | Yes | The period to return APR for | |
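Because the schema never enumerates valid `period` values, a caller must guess them. The sketch below illustrates that ambiguity; `ALLOWED_PERIODS` and the base URL are assumptions for illustration, not the server's actual contract.

```python
# Hypothetical values; the real API's accepted periods are undocumented here.
BASE_URL = "https://example-beefy-api.invalid"
ALLOWED_PERIODS = {"1d", "7d", "30d"}  # assumed, not confirmed by the schema

def vaults_apr_url(chain: str, period: str) -> str:
    """Build the URL for GET /api/v1/vaults/{chain}/{period}."""
    if period not in ALLOWED_PERIODS:
        raise ValueError(
            f"unknown period {period!r}; expected one of {sorted(ALLOWED_PERIODS)}"
        )
    return f"{BASE_URL}/api/v1/vaults/{chain}/{period}"

print(vaults_apr_url("bsc", "7d"))
```

A description that listed the accepted periods would make this guard (and the guessing behind it) unnecessary.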
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but only hints at read-only behavior via 'GET'. It fails to disclose return format, pagination behavior, rate limits, or that the period parameter relates to APR calculations (mentioned only in schema).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While not verbose, the description is inappropriately undersized for a tool with multiple siblings. The single path string fails to front-load critical context (e.g., APR data retrieval), representing under-specification rather than efficient conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite simple parameters (2 strings, no nesting), the description is incomplete. No output schema exists, yet the description fails to explain return values (vault objects? APR percentages?) or distinguish this from the 8+ sibling vault tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with 'chain' and 'period' adequately documented in the properties. The description adds no parameter semantics beyond the path template, warranting the baseline score for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description merely restates the HTTP endpoint path 'GET /api/v1/vaults/{chain}/{period}', which tautologically mirrors the tool name. It fails to explain what data is retrieved (vault list vs. APR aggregates), what constitutes a 'period', or how this differs from sibling vault tools like get_api_v1_vaults_chain_harvests_since.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus the more specific vault address endpoints (e.g., get_api_v1_vault_chain_vault_address_price) or the harvests endpoint. No prerequisites or filtering logic is explained.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
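As a sanity check before publishing, you can verify locally that your file parses and lists the expected maintainer email. The validation rules below are a guess at the minimum Glama would check; the actual verification logic is not documented here.

```python
import json

def has_maintainer_email(raw: str, account_email: str) -> bool:
    """Parse a glama.json payload and check that `account_email`
    appears among the maintainers. A minimal sketch; Glama's real
    verification may check more than this."""
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

sample = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
print(has_maintainer_email(sample, "your-email@example.com"))
```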
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.