Glama
TeleKashOracle

telekash-mcp-server

get_market_stats

Retrieve aggregate statistics across prediction markets, including totals, categories, sources, and trading volume, to support market overviews and portfolio allocation decisions.

Instructions

Get aggregate statistics across all prediction markets — totals, categories, sources, and volume.

Returns total market count, active markets, category distribution, source breakdown (Kalshi vs Polymarket), and aggregate trading volume. Use for market overview, portfolio allocation decisions, or understanding the prediction market landscape.

Input Schema


No arguments
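Because the tool takes no arguments, a `tools/call` invocation is minimal. A sketch of the JSON-RPC payload an MCP client would send (field names follow the MCP specification; the `id` value is arbitrary):

```typescript
// JSON-RPC 2.0 request an MCP client sends to invoke get_market_stats.
// With an empty input schema, `arguments` is just an empty object.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_market_stats",
    arguments: {},
  },
};

console.log(JSON.stringify(request));
```

The server routes this by `params.name` through its tool-handler switch and replies with a `content` array on the same `id`.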

Implementation Reference

  • The core handler implementation for get_market_stats, querying telekash_markets and aggregating results by category and source.
    async function getMarketStats(supabase: SupabaseClient): Promise<ToolResult> {
      const { data: markets, error } = await supabase
        .from("telekash_markets")
        .select("id, status, category, source");
      if (error) throw new Error(`Stats error: ${error.message}`);

      const all = markets || [];
      const byCategory: Record<string, number> = {};
      const bySource: Record<string, number> = {};
      for (const m of all) {
        byCategory[m.category || "other"] =
          (byCategory[m.category || "other"] || 0) + 1;
        bySource[m.source || "unknown"] =
          (bySource[m.source || "unknown"] || 0) + 1;
      }

      // The excerpt is truncated here; the return below is a reconstruction
      // (field names are assumptions, not taken from the server's source).
      return {
        content: [{
          type: "text",
          text: JSON.stringify({
            total: all.length,
            active: all.filter((m) => m.status === "active").length,
            byCategory,
            bySource,
          }),
        }],
      };
    }
  • The tool registration/dispatch logic within the main tool handler switch block.
    case "get_market_stats":
      return getMarketStats(supabase);
  • The MCP tool definition and schema for get_market_stats.
    {
      name: "get_market_stats",
      description: `Get aggregate statistics — total markets, categories, sources, and volume.`,
      inputSchema: { type: "object", properties: {}, required: [] },
    },
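The excerpts above compose in sequence: the definition is advertised via `tools/list`, the switch dispatches `tools/call`, and the handler aggregates rows. The counting step can be shown in isolation over hand-written sample rows (the data below is illustrative, not real market records):

```typescript
// Standalone demonstration of the aggregation used in getMarketStats,
// run against illustrative sample rows rather than a Supabase query.
type Row = { category: string | null; source: string | null };

function countBy(rows: Row[], key: keyof Row, fallback: string) {
  const out: Record<string, number> = {};
  for (const r of rows) {
    // Null or empty values fall back to a catch-all bucket,
    // mirroring the `m.category || "other"` pattern in the handler.
    const k = (r[key] as string | null) || fallback;
    out[k] = (out[k] || 0) + 1;
  }
  return out;
}

const rows: Row[] = [
  { category: "politics", source: "kalshi" },
  { category: "politics", source: "polymarket" },
  { category: null, source: "kalshi" },
];

console.log(countBy(rows, "category", "other"));  // { politics: 2, other: 1 }
console.log(countBy(rows, "source", "unknown"));  // { kalshi: 2, polymarket: 1 }
```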
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what data is returned but doesn't mention behavioral aspects like whether this is a read-only operation (implied but not stated), potential rate limits, authentication requirements, or data freshness. The description adds value by specifying the scope ('across all prediction markets') but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
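The read-only nature the reviewer notes could be made explicit with MCP tool annotations rather than prose alone. A sketch of how the definition might declare them (the hint fields come from the MCP specification's `ToolAnnotations`; whether this server's SDK version surfaces them is an assumption):

```typescript
// Hypothetical variant of the tool definition with behavioral annotations.
// readOnlyHint marks the tool as non-mutating; openWorldHint reflects that
// it reads from an external database whose contents change between calls.
const toolDefinition = {
  name: "get_market_stats",
  description:
    "Get aggregate statistics — total markets, categories, sources, and volume.",
  inputSchema: { type: "object", properties: {}, required: [] },
  annotations: {
    readOnlyHint: true,
    destructiveHint: false,
    idempotentHint: true,
    openWorldHint: true,
  },
};

console.log(JSON.stringify(toolDefinition.annotations));
```

With annotations present, the description no longer carries the full burden of behavioral disclosure on its own.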

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise. The first sentence states the core purpose, the second enumerates specific statistics returned, and the third provides usage guidance. Every sentence earns its place with no redundant information, and it's front-loaded with the most important information first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's zero parameters and lack of annotations/output schema, the description provides good contextual completeness. It explains what statistics are returned and when to use the tool. However, without an output schema, it could benefit from more detail about the return format (e.g., whether it's a single object with nested fields). The description is complete enough for basic understanding but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
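One way to close the gap the reviewer identifies is an explicit sketch of the return shape. Assuming the handler returns a single flat stats object (the field names below are illustrative, inferred from the description rather than taken from the server's source):

```typescript
// Illustrative shape of the stats payload — field names are assumptions.
interface MarketStats {
  total: number;                       // total market count
  active: number;                      // markets with status "active"
  byCategory: Record<string, number>;  // e.g. { politics: 12, sports: 7 }
  bySource: Record<string, number>;    // e.g. { kalshi: 10, polymarket: 9 }
}

const example: MarketStats = {
  total: 19,
  active: 14,
  byCategory: { politics: 12, sports: 7 },
  bySource: { kalshi: 10, polymarket: 9 },
};

console.log(JSON.stringify(example));
```

Documenting even a hypothetical shape like this in the description (or as an MCP output schema) would let an agent plan its parsing before the first call.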

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (schema coverage 100%), so the description doesn't need to explain parameters. The baseline for zero parameters is 4, and the description appropriately focuses on what the tool does rather than parameter semantics. It mentions the scope ('across all prediction markets') which helps understand the implicit parameterization.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get aggregate statistics') and resources ('across all prediction markets'), listing the exact types of statistics returned. It distinguishes from siblings like 'list_markets' (which likely lists individual markets) and 'get_trending' (which focuses on trending markets rather than aggregate statistics).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use for market overview, portfolio allocation decisions, or understanding the prediction market landscape.' This provides clear guidance on appropriate use cases, distinguishing it from siblings that serve different purposes like getting specific market probabilities or searching/filtering markets.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/TeleKashOracle/mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.