REI Crypto MCP Server

by 0xReisearch

get_bridge_details

Retrieve bridge volume summary and chain breakdown by providing a bridge ID. Use this tool to analyze cross-chain transaction data from crypto bridges.

Instructions

GET /bridge/{id}

Get summary of bridge volume and volume breakdown by chain.

Parameters:
    id: bridge ID (can be retrieved from /bridges)

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| id   | Yes      |             |         |

Output Schema

| Name   | Required | Description | Default |
| ------ | -------- | ----------- | ------- |
| result | Yes      |             |         |

Implementation Reference

  • The handler for the 'get_bridge_details' MCP tool. It is registered with the @mcp.tool() decorator and fulfils the request by calling the DefiLlama bridges endpoint '/bridge/{id}' through the shared 'make_request' helper, returning the JSON response as a string.
    @mcp.tool()
    async def get_bridge_details(id: int) -> str:
        """GET /bridge/{id}
        
        Get summary of bridge volume and volume breakdown by chain.
        
        Parameters:
            id: bridge ID (can be retrieved from /bridges)
        """
        result = await make_request('GET', f'/bridge/{id}')
        return str(result)
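
The shared 'make_request' helper is not shown on this page. A minimal stand-in sketch, assuming the DefiLlama bridges API base URL 'https://bridges.llama.fi' and a plain JSON GET:

```python
import asyncio
import json
import urllib.request

# Assumed base URL for the DefiLlama bridges API; the real helper may
# differ (sessions, retries, error handling), so treat this as a sketch.
BASE_URL = "https://bridges.llama.fi"

def build_url(path: str) -> str:
    """Join the API base with an endpoint path such as '/bridge/1'."""
    return BASE_URL + path

async def make_request(method: str, path: str) -> dict:
    """Stand-in for the shared helper: fetch a JSON endpoint off-thread."""
    def fetch() -> dict:
        req = urllib.request.Request(build_url(path), method=method)
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.loads(resp.read())
    return await asyncio.to_thread(fetch)
```

With this stand-in, `await make_request('GET', '/bridge/1')` would return the parsed volume summary for bridge id 1.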
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that this is a GET operation, implying it is likely read-only, but it never confirms safety or the absence of side effects, and it says nothing about rate limits, authentication, or the shape of the output (though the presence of an output schema helps). For a tool with no annotations, that is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
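
One way to close this gap is MCP tool annotations. The field names below follow the MCP specification's ToolAnnotations, but how they are attached via @mcp.tool(...) depends on the SDK version, so only the payload is sketched here:

```python
# Hypothetical annotations that would disclose this tool's behavior up front.
def bridge_details_annotations() -> dict:
    return {
        "readOnlyHint": True,      # GET only; never mutates server state
        "destructiveHint": False,  # no deletes or irreversible updates
        "idempotentHint": True,    # repeated calls with the same id are safe
        "openWorldHint": True,     # reaches out to the external DefiLlama API
    }
```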

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: it starts with the HTTP method and endpoint, states the purpose clearly, and lists parameters with brief explanations. Every sentence earns its place, with no redundant information, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter) and the presence of an output schema (which handles return values), the description is reasonably complete. It covers the purpose, parameter semantics, and a hint on usage. However, it lacks behavioral details like safety or rate limits, which would be beneficial since no annotations are provided, keeping it from a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter: 'id: bridge ID (can be retrieved from /bridges)'. This clarifies what 'id' represents and how to obtain it, which is valuable since the input schema has 0% description coverage (only providing type and title). With one parameter, this compensation is sufficient to score highly, though it doesn't detail format constraints beyond being an integer.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
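
The 0% description coverage could be lifted directly in the schema. A hypothetical version of the input schema with the parameter documented:

```python
# Sketch of an input schema for 'id' with a description filled in; the
# wording is illustrative, not taken from the actual server.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "id": {
            "type": "integer",
            "title": "Id",
            "description": "Numeric bridge ID; enumerate valid IDs via GET /bridges.",
        }
    },
    "required": ["id"],
}
```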

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get summary of bridge volume and volume breakdown by chain.' This specifies the verb ('Get'), resource ('bridge'), and what information is retrieved ('summary of bridge volume and volume breakdown by chain'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'get_bridge_volume' or 'get_bridge_day_stats', which might offer similar or overlapping data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance: it mentions that the 'id' parameter 'can be retrieved from /bridges', which hints at a prerequisite but doesn't specify when to use this tool versus alternatives. There's no explicit guidance on when to choose this over sibling tools like 'get_bridge_volume' or 'get_bridge_transactions', leaving the agent to infer based on the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
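
A hypothetical revision of the docstring adding that guidance; the sibling tool names are taken from the review text above, and the "use X over Y" wording is illustrative:

```python
# Illustrative rewrite of the tool docstring with explicit usage guidance.
IMPROVED_DOCSTRING = """GET /bridge/{id}

Get summary of bridge volume and volume breakdown by chain.

Use this tool for a per-chain breakdown of a single bridge's volume.
Prefer get_bridge_volume for aggregate volume over time, and
get_bridge_transactions for individual transfers.
First call /bridges to resolve the numeric bridge id.

Parameters:
    id: bridge ID (can be retrieved from /bridges)
"""
```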

