TeleKash Oracle
Server Details
Prediction market probability oracle for AI agents. 26 tools across 1000+ live markets from Kalshi and Polymarket. No install required; connect directly via streamable HTTP.
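Connecting over streamable HTTP means speaking ordinary MCP JSON-RPC over POST to the server's MCP endpoint. A minimal sketch of the opening initialize request (the clientInfo values are placeholders, and the protocol version shown is an assumption based on the streamable HTTP revision of the MCP spec):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-agent", "version": "1.0.0" }
  }
}
```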
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
9 tools

generate_api_key (Grade: A)
Generate a free TeleKash API key. Free tier: 100 calls/day. Save the key — shown once.
| Name | Required | Description | Default |
|---|---|---|---|
| owner_id | Yes | Agent or user identifier | |
| owner_email | No | Contact email (optional) | |
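A plausible tools/call request for this tool, using the two documented parameters; the owner values are placeholders, and the response shape is not documented beyond the key being shown once:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_api_key",
    "arguments": {
      "owner_id": "agent-042",
      "owner_email": "ops@example.com"
    }
  }
}
```

Because the key is shown only once, a calling agent should persist the result immediately.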
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses rate limits (100 calls/day), cost tier (free), and ephemeral output behavior (shown once). Does not clarify whether multiple calls create multiple keys or invalidate existing ones, nor the exact return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: (1) purpose, (2) rate limits, (3) critical ephemeral warning. Front-loaded with action, compact delivery of constraints. No redundant phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists; description compensates by explaining the key is ephemeral and must be saved. Missing explicit return value structure, but 'shown once' sufficiently signals the sensitive output nature for this simple 2-param credential generation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage describing owner_id and owner_email. Description adds no parameter-specific guidance, but baseline is 3 when schema_coverage >80% per rubric. No syntax hints or examples provided beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Generate' with clear resource 'API key'. The 'free TeleKash' qualifier distinguishes this from paid key generation (if it existed) and the action clearly contrasts with read-only siblings (get_* and search_* tools).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage constraint 'Save the key — shown once' which signals immediate persistence is required. Includes rate limit context '100 calls/day'. Does not explicitly contrast with get_usage sibling, but the setup nature vs. read operations is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_history (Grade: B)
Get historical probability changes and trend data for a market over 1h/24h/7d/30d.
| Name | Required | Description | Default |
|---|---|---|---|
| market_id | Yes | Market UUID or external_id | |
| timeframe | No | Time range | 24h |
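A sketch of a call requesting a week of history; the market_id value is a placeholder, and the timeframe mirrors the 1h/24h/7d/30d range named in the description:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_history",
    "arguments": {
      "market_id": "your-market-uuid-or-external-id",
      "timeframe": "7d"
    }
  }
}
```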
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return type (historical probability changes/trend data) but omits details like data granularity, pagination, or error behavior for invalid market IDs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 12-word sentence with action-first structure. Zero redundancy; every word serves the definition. Appropriately sized for a 2-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for tool complexity: 2 simple parameters, 100% schema coverage, no nested objects. Without annotations, could explicitly state this is read-only, but 'Get' and 'historical' sufficiently imply safe read behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete descriptions for both parameters. Description adds value by mapping '1h/24h/7d/30d' to the timeframe parameter, aligning text with enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' with specific resource 'historical probability changes and trend data' and scope '1h/24h/7d/30d'. Implies distinction from get_probability (current) but doesn't explicitly differentiate from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or alternative comparison. The word 'historical' implies use for past trends vs current data, but lacks explicit direction on choosing between this and get_market_stats or get_probability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_stats (Grade: B)
Get aggregate statistics — total markets, categories, sources, and volume.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
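Since the tool takes no parameters, a call is just the tool name with an empty arguments object:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_market_stats",
    "arguments": {}
  }
}
```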
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'Get' implies read-only safety, the description omits rate limits, caching behavior, real-time vs cached data status, and whether this is an expensive operation to invoke.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence front-loaded with the core action. Em-dash efficiently enumerates return values without verbosity. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a zero-parameter aggregation tool. Specifies what statistics are returned (totals and volume), which suffices given the lack of output schema and simple input requirements, though scoping (global vs filtered) could be clearer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, meeting the baseline score of 4 per evaluation guidelines. Schema is empty object with 100% coverage vacuously satisfied.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' and resource 'aggregate statistics' with clear enumeration of return data points (markets, categories, sources, volume). Distinguishes implicitly from siblings like list_markets (individual records) and get_probability (specific metric), though explicit contrast is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus siblings like list_markets or get_usage. No mention of access patterns or prerequisites despite having multiple related tools that return market data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_probability (Grade: A)
Get real-time probability for any prediction market outcome. Returns YES/NO probabilities (0-100%), volume, liquidity, and market metadata from Kalshi and Polymarket.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Natural language search query (alternative to market_id) | |
| market_id | No | Market UUID or external_id | |
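A sketch using the natural-language path; per the schema, query is an alternative to market_id, so only one is supplied here, and the question text is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_probability",
    "arguments": {
      "query": "Will the Fed cut rates at its next meeting?"
    }
  }
}
```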
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses real-time nature, specific data sources (Kalshi/Polymarket), and detailed return structure (YES/NO 0-100%, volume, liquidity, metadata). Lacks rate limits or auth requirements but covers core behavioral traits well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads the action and resource; second sentence details return values. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates effectively for missing output schema by detailing return values (probabilities, volume, liquidity, metadata) in description. Covers data sources and real-time aspect. Adequate for a simple 2-parameter read-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description mentions 'any prediction market outcome' which aligns with the flexible querying capability, but does not add syntax details or explicit parameter relationships beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Get') + resource ('real-time probability for any prediction market outcome') + scope (Kalshi and Polymarket). Clearly distinguishes from siblings like search_markets, list_markets, or get_market_stats by focusing specifically on probability retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through return value description (probabilities, volume, liquidity) but lacks explicit when-to-use guidance versus alternatives like search_markets or get_market_stats. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sentiment (Grade: B)
Get AI-powered sentiment analysis (-1 to 1), recommendation, and confidence for a market.
| Name | Required | Description | Default |
|---|---|---|---|
| market_id | Yes | Market UUID or external_id | |
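A sketch of a call with the single required parameter; the market_id value is a placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_sentiment",
    "arguments": {
      "market_id": "your-market-uuid-or-external-id"
    }
  }
}
```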
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It discloses output format (scale -1 to 1) and return components (sentiment, recommendation, confidence), which is valuable. However, lacks operational details like rate limits, latency implications of 'AI-powered', caching behavior, or determinism.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded action ('Get'), includes specific scale constraint (-1 to 1), and enumerates three distinct output components efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without output schema, describing the three return values (sentiment score, recommendation, confidence) and their scale provides adequate completeness. Missing only operational details like rate limiting.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (market_id described as 'Market UUID or external_id'). Description doesn't mention the parameter, but with complete schema documentation, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' and clear resource (sentiment analysis for a market). Uniquely distinguishes from siblings like get_probability and get_market_stats by specifying 'AI-powered sentiment' with output components (recommendation, confidence) and scale (-1 to 1).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to select sentiment analysis versus sibling alternatives like get_probability or get_market_stats. Doesn't indicate use cases (e.g., 'use when analyzing market mood vs. price action').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trending (Grade: C)
Markets with biggest probability swings — momentum detection for trending events.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max markets (max: 25) | 10 |
| timeframe | No | Lookback window | 24h |
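A sketch asking for the five biggest movers over the last hour; whether timeframe accepts the same 1h/24h/7d/30d values documented for get_history is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "get_trending",
    "arguments": {
      "limit": 5,
      "timeframe": "1h"
    }
  }
}
```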
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers only the algorithmic hint (probability swings). It omits safety confirmations (read-only vs write), rate limits, caching behavior, or return structure details that would help the agent understand invocation constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with front-loaded key concept (probability swings). The em-dash construction efficiently packs two related descriptors into a compact form without redundancy. Appropriately sized for a simple read-only tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a low-complexity tool with 100% schema coverage, but lacks critical behavioral context expected when no annotations exist (e.g., output format, real-time vs cached data). The description defines the 'trending' algorithm sufficiently but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameter meanings are already clear from the schema (limit and timeframe). The description adds no additional parameter context, syntax guidance, or dependency notes, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description effectively specifies the resource (markets) and the specific behavior (probability swings, momentum detection). It distinguishes from siblings like list_markets and search_markets by clarifying that 'trending' refers to volatility in probabilities rather than just popular or recent markets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to choose this over alternatives like get_history or list_markets. While 'momentum detection' implies use cases, there is no explicit when-to-use or when-not-to-use guidance, nor comparison to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_usage (Grade: A)
Check API usage, rate limits, and tier status.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
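Like get_market_stats, this is a zero-argument call; an agent might invoke it before a burst of requests to check remaining quota:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_usage",
    "arguments": {}
  }
}
```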
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Carries full disclosure burden due to missing annotations. While 'Check' implies read-only access and the description lists returned data categories, it omits critical behavioral details like response format structure, units of measurement, or whether this call consumes rate limit quota itself.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single eight-word sentence with zero filler. Front-loads action verb and efficiently lists three data targets in parallel structure without redundant phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter administrative utility by identifying what information is retrieved, though it would benefit from noting that this returns current account status rather than historical logs (get_history).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters (baseline 4). Description appropriately refrains from parameter discussion since none exist, avoiding false claims.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Check' and clearly enumerates three distinct data categories retrieved (API usage, rate limits, tier status). However, fails to explicitly differentiate from sibling 'get_history' which may conceptually overlap with usage data retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to invoke this tool (e.g., before high-volume operations) versus alternatives like 'get_history', nor mentions prerequisites such as authentication requirements despite accessing account-sensitive data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_markets (Grade: B)
Browse prediction markets across 7 categories with filtering and sorting. 500+ markets from Kalshi, Polymarket, and Metaculus.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max markets (max: 50) | 10 |
| source | No | Filter by source | |
| sort_by | No | Sort order | |
| category | No | Filter by category | all |
| jurisdiction | No | Filter by jurisdiction | |
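A sketch combining several filters; the specific enum values shown (category, source, sort order) are guesses, since the listing does not enumerate the accepted values:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "list_markets",
    "arguments": {
      "category": "politics",
      "source": "kalshi",
      "sort_by": "volume",
      "limit": 20
    }
  }
}
```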
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. It mentions data provenance (Kalshi, Polymarket, Metaculus) and volume ('500+ markets'), giving context about coverage. However, it omits pagination behavior, rate limits, and what 'browse' returns when no filters are applied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences. First establishes core functionality and capabilities; second provides data provenance and scale. No redundancy or wasted words—every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich schema (5 parameters with enums) and absence of output schema/annotations, the description adequately covers scope but lacks explanation of return values, default behaviors when called with no arguments, and explicit differentiation from 'search_markets'. It meets minimum viability but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear enum values and types. The description mentions '7 categories' and specific sources which align with enum options but doesn't add syntax details, constraints, or inter-parameter relationships beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (browse) and resource (prediction markets) with specific scope (7 categories, filtering/sorting). It implicitly distinguishes from sibling 'search_markets' by emphasizing browsing/filtering over search, though explicit guidance would strengthen this.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance is provided. It does not clarify when to use 'browse' (this tool) versus 'search_markets' (text search), nor does it note that all parameters are optional or suggest default usage patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_markets (Grade: B)
Search 500+ prediction markets by keyword or natural language query.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (max: 50) | 10 |
| query | Yes | Search query | |
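A sketch of a free-form search; the query text is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "search_markets",
    "arguments": {
      "query": "Will Bitcoin close above $100k this year?",
      "limit": 5
    }
  }
}
```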
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses corpus scope ('500+') but omits return format (crucial given no output_schema), caching behavior, or rate limiting. 'Search' implies read-only operation, but this is not explicitly confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the core action and resource. No redundant words. '500+' and 'natural language' efficiently convey scope and capability without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with full schema coverage, but lacks output_schema. Description fails to indicate what the tool returns (market IDs, titles, probabilities?), leaving a gap in contextual completeness given no annotations or structured return schema to reference.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds significant value by specifying 'natural language query' capability beyond the schema's generic 'Search query', signaling the agent can pass free-form text rather than just keywords. Also clarifies the domain (prediction markets).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Search') and resource ('500+ prediction markets'), including scope. Distinguishes implicitly from siblings like list_markets by mentioning 'keyword or natural language query', but lacks explicit differentiation (e.g., 'use this instead of list_markets when filtering by keywords').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this tool versus alternatives like list_markets or get_trending. While 'natural language query' implies usage for text-based discovery, it does not state prerequisites, exclusions, or comparative scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.