
get_arbitrage_live

Detect real-time arbitrage opportunities across prediction markets by scanning platforms like Kalshi and Polymarket for price discrepancies that meet specified spread thresholds.

Instructions

Run a fresh cross-platform arbitrage scan (may take 10-30 seconds).

Args:
    min_spread: Minimum spread threshold (0.0-1.0). Default 0.02 (2%).

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| min_spread | No | | |

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |

Implementation Reference

  • The tool `get_arbitrage_live` is defined here as an MCP tool, which calls the `/v1/arbitrage/live` endpoint using an internal `_request` helper function.

```python
@mcp.tool()
async def get_arbitrage_live(min_spread: float = 0.02) -> str:
    """Run a fresh cross-platform arbitrage scan (may take 10-30 seconds).

    Args:
        min_spread: Minimum spread threshold (0.0-1.0). Default 0.02 (2%).
    """
    return await _request("GET", "/v1/arbitrage/live", params={"min_spread": min_spread})
```
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses critical behavioral traits: the scan is 'fresh' (real-time calculation) and has high latency (10-30 seconds). However, it omits other behavioral details like error handling, rate limits, or whether this operation consumes API credits/quotas.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with the primary action in the first sentence, followed by a clear Args section. Every sentence earns its place—there is no redundancy or unnecessary elaboration. The timing warning is appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single-parameter simplicity and existence of an output schema (which covers return values), the description is appropriately complete. It covers the action, performance characteristics, and parameter details. A minor gap is the lack of explicit differentiation from 'get_arbitrage,' but this is sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description fully compensates via the Args section. It provides the parameter's semantics (minimum spread threshold), valid range (0.0-1.0), and default value (0.02/2%), giving complete context that the schema fails to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
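Because the schema itself carries no constraints, nothing stops a caller from passing an out-of-range value. A minimal client-side pre-flight check for the documented 0.0-1.0 range (a hypothetical helper, not part of the server):

```python
def validate_min_spread(min_spread: float = 0.02) -> float:
    """Reject values outside the documented 0.0-1.0 range before calling the tool."""
    if not 0.0 <= min_spread <= 1.0:
        raise ValueError(f"min_spread must be in [0.0, 1.0], got {min_spread}")
    return min_spread
```

Encoding the same bounds in the input schema (e.g. JSON Schema `minimum`/`maximum`) would let clients enforce this without extra code.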

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'fresh cross-platform arbitrage scan' with specific verb (run/scan) and resource (arbitrage opportunities). The term 'fresh' effectively distinguishes it from sibling tool 'get_arbitrage' (implying cached vs. live data), though it doesn't explicitly name the alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit usage guidance by warning that the operation 'may take 10-30 seconds,' hinting it shouldn't be used when immediate results are needed. However, it fails to explicitly direct users to 'get_arbitrage' as the faster/cached alternative or clarify when live vs. cached data is preferable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
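One way an agent-side wrapper could act on that guidance: prefer the live scan, but fall back to the (assumed cached) sibling tool `get_arbitrage` when the live call exceeds its latency budget. Here `call_tool` stands in for a hypothetical MCP-client callable of the form `(tool_name, arguments) -> result`:

```python
import asyncio


async def scan_with_fallback(call_tool, min_spread: float = 0.02, timeout: float = 45.0) -> str:
    """Prefer the live scan; fall back to the cached sibling tool on timeout."""
    try:
        # The live scan may take 10-30 seconds, so allow headroom before giving up.
        return await asyncio.wait_for(
            call_tool("get_arbitrage_live", {"min_spread": min_spread}),
            timeout=timeout,
        )
    except asyncio.TimeoutError:
        # Assumption: `get_arbitrage` serves cached results quickly.
        return await call_tool("get_arbitrage", {"min_spread": min_spread})
```

This is only a sketch of the "use X instead of Y when Z" pattern the review asks descriptions to spell out explicitly.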
