
screen_markets

Screen prediction markets by score, volume, or specific IDs to identify opportunities with actionable recommendations for analysis, watching, or skipping.

Instructions

Batch screen markets by score, volume, or specific IDs.

Returns scored markets with an action recommendation: "analyze", "watch", or "skip".

Args:
- market_ids: Optional list of specific market IDs to screen.
- platform: Filter by platform: "kalshi", "polymarket", or "" for all.
- min_volume_24h: Minimum 24h volume filter.
- min_score: Minimum composite score filter.
- limit: Maximum number of results to return.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| market_ids | No | | |
| platform | No | | |
| min_volume_24h | No | | |
| min_score | No | | |
| limit | No | | |
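Because the schema rows carry no inline descriptions, a concrete call helps. The dict below is a hypothetical set of arguments (all values illustrative) matching the parameters documented in the tool description:

```python
# Hypothetical arguments for a screen_markets call (all values illustrative).
# Omitted parameters fall back to the tool's defaults.
args = {
    "platform": "kalshi",       # or "polymarket", or "" for all platforms
    "min_volume_24h": 10000.0,  # skip thin markets
    "min_score": 0.6,           # keep only well-scored markets
    "limit": 10,
}
```

Any subset of these keys is valid; an empty argument set screens all platforms with no volume or score floor.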

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
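The output schema only names a required `result` field. The sketch below is a guess at its per-market shape; field names like `market_id` and `score` are assumptions, and only the three action values come from the tool description:

```python
# Hypothetical response shape -- field names are assumptions; only the
# three action values ("analyze", "watch", "skip") come from the tool docs.
response = {
    "result": [
        {"market_id": "KX-EXAMPLE-26", "score": 0.82, "action": "analyze"},
        {"market_id": "0xabc123", "score": 0.31, "action": "skip"},
    ]
}
# Every returned market carries exactly one of the documented actions.
assert all(m["action"] in {"analyze", "watch", "skip"} for m in response["result"])
```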

Implementation Reference

Implementation of the 'screen_markets' tool in the Rekko MCP server. It proxies requests to the '/v1/screen' API endpoint.

```python
async def screen_markets(
    market_ids: list[str] | None = None,
    platform: str = "",
    min_volume_24h: float = 0.0,
    min_score: float = 0.0,
    limit: int = 50,
) -> str:
    """Batch screen markets by score, volume, or specific IDs.

    Returns scored markets with an action recommendation: "analyze", "watch", or "skip".

    Args:
        market_ids: Optional list of specific market IDs to screen.
        platform: Filter by platform: "kalshi", "polymarket", or "" for all.
        min_volume_24h: Minimum 24h volume filter.
        min_score: Minimum composite score filter.
        limit: Maximum number of results to return.
    """
    body: dict = {"limit": limit}
    if market_ids:
        body["market_ids"] = market_ids
    if platform:
        body["platform"] = platform
    if min_volume_24h > 0:
        body["min_volume_24h"] = min_volume_24h
    if min_score > 0:
        body["min_score"] = min_score
    return await _request("POST", "/v1/screen", json=body)
```
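The conditional body construction above can be exercised in isolation. This synchronous sketch (`build_screen_body` is a name introduced here, not part of the server) shows which filters actually reach the API:

```python
def build_screen_body(
    market_ids=None, platform="", min_volume_24h=0.0, min_score=0.0, limit=50
):
    # Mirrors the tool's filter-omission logic: empty or zero-valued
    # filters are left out of the request body entirely, so the API
    # applies its own defaults rather than receiving explicit zeros.
    body = {"limit": limit}
    if market_ids:
        body["market_ids"] = market_ids
    if platform:
        body["platform"] = platform
    if min_volume_24h > 0:
        body["min_volume_24h"] = min_volume_24h
    if min_score > 0:
        body["min_score"] = min_score
    return body

print(build_screen_body(platform="polymarket", min_score=0.5))
# {'limit': 50, 'platform': 'polymarket', 'min_score': 0.5}
```

Note that `limit` is always sent, while every other parameter is opt-in; a bare call produces `{"limit": 50}`.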
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses the tool's behavioral output by specifying that it returns 'scored markets with an action recommendation' and explicitly lists the three possible recommendation values ('analyze', 'watch', 'skip'). This is crucial behavioral context for agent decision-making, though operational details like rate limits or caching are absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement, return value description, and a dedicated Args section. Information is front-loaded with the core function and output type. The Args section is efficiently formatted without redundancy, though it occupies more space than it would if the input schema carried its own parameter descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately focuses on parameter documentation and high-level behavioral output rather than detailed return structure. With 5 parameters and complex filtering logic, the description provides sufficient context for invocation by documenting all filters and the recommendation system, though it could note whether the operation is read-only.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, requiring the description to compensate fully. The Args section successfully documents all 5 parameters (market_ids, platform, min_volume_24h, min_score, limit) with sufficient semantics, including valid enum values for platform ('kalshi', 'polymarket', or ''). This fully compensates for the schema deficiency.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'batch screening' of markets using score, volume, or specific IDs as criteria. It further distinguishes itself from sibling tools by specifying that it returns markets with action recommendations ('analyze', 'watch', or 'skip'), which implies a filtering/validation function distinct from simple listing or deep analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The term 'Batch screen' implies the tool is for bulk filtering operations, and the parameter descriptions suggest when to use specific filters (e.g., min_volume_24h). However, it lacks explicit guidance on when to choose this over siblings like search_markets, list_markets, or analyze_market, leaving the selection criteria somewhat implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

