
Server Details

Live status, API pricing and rate limits for ChatGPT, Claude, Gemini, Cursor and 42+ AI tools.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: imviky-ctrl/tickerr-mcp
GitHub Stars: 0
Server Listing
Tickerr — Live AI Tool Status & API Pricing

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 3.7/5 across 9 of 9 tools scored. Lowest: 3.1/5.

Server Coherence: Grade A
Disambiguation: 4/5

Most tools are clearly distinct in purpose. There is minor overlap between 'compare_pricing' and 'get_api_pricing' (both pricing-related) and between 'get_model_performance' and 'get_tool_status' (both performance metrics), but descriptions provide enough differentiation for an agent to choose correctly.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., get_api_pricing, compare_pricing, report_incident). No mixing of conventions or irregular naming, making it easy to predict tool names.

Tool Count: 5/5

With 9 tools covering pricing, performance, incidents, status, and reporting, the count is well-scoped for the server's purpose. Neither too sparse nor overloaded.

Completeness: 5/5

The tool set covers all major aspects of an AI tool monitoring service: pricing, performance, incidents, rate limits, status, listing, and incident reporting. There are no obvious gaps that would hinder typical agent workflows.

Available Tools

9 tools
compare_pricing (Grade A)

Rank LLM models by total cost for a given token workload. Use this to find the cheapest model for your specific input/output ratio - useful for agent routing decisions. Optionally filter to a single provider.

Parameters (JSON Schema)
- top (optional): Show only the N cheapest models (default 10)
- filter (optional): Narrow to a provider - e.g. "claude", "gpt", "gemini", "mistral"
- input_tokens (required): Number of input tokens per request
- output_tokens (optional): Number of output tokens per request (default 0)
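
To make the call shape concrete, here is a minimal sketch of an arguments payload for compare_pricing using the parameter names from the schema above. The token counts and filter value are illustrative, and the cost formula is only an assumption about how a per-1M-token ranking would typically be computed; the description does not confirm the ranking basis.

```python
# Illustrative compare_pricing arguments; the values are made up for the example.
compare_pricing_args = {
    "input_tokens": 4000,   # required: prompt tokens per request
    "output_tokens": 800,   # optional: completion tokens per request (default 0)
    "top": 5,               # optional: return only the 5 cheapest models
    "filter": "claude",     # optional: narrow to one provider family
}

# Assumed cost model (not confirmed by the description): prices quoted per
# 1M tokens, with total cost scaling linearly with the input/output split.
def estimated_request_cost(input_price_per_m: float, output_price_per_m: float,
                           input_tokens: int, output_tokens: int = 0) -> float:
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000
```

With hypothetical rates of $3.00 per 1M input tokens and $15.00 per 1M output tokens, the payload above works out to roughly $0.024 per request, which is the kind of figure a total-cost ranking would compare across models.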
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions ranking by total cost and usefulness for cost comparison, but lacks details on how the ranking is performed (e.g., based on current pricing, includes hidden fees), what data sources are used, whether it's real-time or cached, or any rate limits or authentication needs. This leaves significant behavioral aspects unclear for a tool that could involve complex calculations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, consisting of two sentences that directly state the tool's purpose and utility. Every sentence earns its place by clearly communicating core functionality without redundancy or unnecessary details, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of cost comparison (involving calculations, data sources, and potential variability) and the lack of annotations and output schema, the description is incomplete. It does not explain what the output looks like (e.g., ranked list format, cost units), how results are derived, or any limitations (e.g., regional pricing differences). For a tool with no structured output and behavioral gaps, this falls short of providing sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds marginal value by implying that 'input_tokens' and 'output_tokens' are used to calculate total cost, but it does not provide additional semantics beyond what the schema descriptions state (e.g., how tokens relate to cost, default behaviors beyond schema defaults). Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Rank AI models by total cost') and resources ('AI models'), and distinguishes it from siblings by focusing on comparative cost analysis rather than retrieving raw pricing data or other operational information. It explicitly mentions the use case ('finding the cheapest model'), making the purpose distinct and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Useful for finding the cheapest model for your use case'), which implies it's for cost optimization scenarios. However, it does not explicitly state when not to use it or name alternatives (e.g., 'get_api_pricing' for raw pricing data), leaving some guidance gaps compared to explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_api_pricing (Grade A)

Get current API token pricing (input/output/cached cost per 1M tokens) for 300+ LLM models. Prices sourced from OpenRouter API and official provider docs, updated twice daily. Use this to find the cheapest available model or check pricing before routing a request. Filter by model name or provider.

Parameters (JSON Schema)
- limit (optional): Max models to return (default 50)
- filter (optional): Filter by model or provider name - e.g. "claude", "gpt-4o", "gemini", "mistral"
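
Because the description quotes prices per 1M tokens in three buckets (input, output, cached), a small worked example helps show how those rates combine for one request. The filter value, rates, and token counts below are hypothetical, and treating cached prompt tokens as billed at the cached rate instead of the input rate is an assumption about provider billing, not something the description states.

```python
# Illustrative get_api_pricing arguments.
get_api_pricing_args = {"filter": "gpt-4o", "limit": 10}

# Hypothetical per-1M-token rates of the kind this tool returns (USD).
input_rate, cached_rate, output_rate = 2.50, 1.25, 10.00

# Assumed accounting: cached prompt tokens replace part of the input-token bill.
prompt_tokens, cached_tokens, completion_tokens = 6000, 2000, 500
cost = ((prompt_tokens - cached_tokens) * input_rate
        + cached_tokens * cached_rate
        + completion_tokens * output_rate) / 1_000_000
print(f"estimated cost per request: ${cost:.5f}")  # $0.01750 with these numbers
```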
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's function and filtering capability, but does not disclose behavioral traits like rate limits, authentication needs, error handling, or pagination behavior. The description is accurate but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, consisting of two concise sentences that directly state the tool's purpose and filtering capability. Every sentence earns its place with no wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, no output schema), the description is complete enough for basic understanding but lacks details on return values, error conditions, or operational constraints. It covers the core functionality but does not fully compensate for the absence of annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('limit' and 'filter') with descriptions. The description adds minimal value beyond the schema by mentioning filtering by 'model or provider name', but does not provide additional syntax, format details, or usage examples beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get current API pricing') and resources ('AI models tracked by tickerr.ai'), including what pricing information is provided ('input/output cost per 1M tokens'). It distinguishes from siblings like 'compare_pricing' by focusing on retrieval rather than comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Filter by model or provider name'), but does not explicitly state when not to use it or name alternatives. It implies usage for retrieving pricing data rather than comparing it, but lacks explicit exclusions or sibling tool comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_free_tier (Grade B)

Find the best free plans across AI tools, grouped by category (LLM APIs, coding assistants, image generation, etc.).

Parameters (JSON Schema)
- category (optional): Filter by category slug - e.g. "llm", "coding", "image", "video"
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but lacks details on behavioral traits such as whether it's read-only, requires authentication, has rate limits, or what the output format might be. For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes essential details without any wasted words. It's appropriately sized for the tool's complexity and structured to convey key information concisely.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is minimally adequate. It covers the purpose and parameter context but lacks completeness in behavioral transparency and usage guidelines, which are important for effective tool invocation by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'category' parameter well-documented. The description adds value by listing example categories ('LLM APIs, coding assistants, image generation, etc.') beyond the schema's generic examples, but it doesn't provide additional semantic context like default behavior or usage constraints. Baseline 3 is appropriate as the schema does most of the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Find') and resource ('best free plans across AI tools'), and it specifies the grouping by categories. However, it doesn't explicitly distinguish this tool from sibling tools like 'compare_pricing' or 'get_api_pricing', which might also involve pricing or plan information, leaving some ambiguity about its unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer this over sibling tools such as 'compare_pricing' or 'get_api_pricing', nor does it specify any prerequisites or exclusions for usage, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_incidents (Grade A)

Get historical incidents (outages, degradations) for any AI tool from the last 90 days. Sourced from official provider status pages.

Parameters (JSON Schema)
- slug (required): Tool slug - e.g. "chatgpt", "claude", "gemini"
- limit (optional): Number of incidents (default 10, max 50)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It describes the data source and temporal scope, but lacks critical behavioral details such as authentication requirements, rate limits, pagination behavior, error handling, or what the return format looks like (especially important without an output schema). The description provides basic context but misses key operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently communicates purpose, scope, and source without unnecessary words. It is front-loaded with the core action and resource, and every element (temporal range, data source) adds value. No wasted verbiage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, no output schema), the description is partially complete. It covers what the tool does and its scope well, but lacks details on behavioral aspects (e.g., response format, error cases) and doesn't compensate for the absence of annotations or output schema. Adequate for basic understanding but with clear gaps for reliable agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (slug with examples, limit with default/max). The description adds no additional parameter semantics beyond what's in the schema—it doesn't explain slug format further, clarify limit behavior, or provide usage examples. Baseline 3 is appropriate when schema does all the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get historical incidents') and resource ('incidents (outages, degradations) for any AI tool'), with precise temporal scope ('from the last 90 days') and data source ('Sourced from 26 official provider status pages'). It distinguishes from sibling tools like get_tool_status (likely current status) by focusing on historical data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through temporal and source constraints (last 90 days, official status pages), but does not explicitly state when to use this tool versus alternatives like get_tool_status or other siblings. No exclusions or prerequisites are mentioned, leaving some ambiguity about appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_model_performance (Grade A)

Get real-time API inference performance for LLM models - TTFT p50/p95 (time-to-first-token latency), throughput (tokens/sec), and 24-hour success rate. Use this alongside get_api_pricing to make routing decisions: cheapest model that is currently operational and fast enough for your latency budget. Covers anthropic, openai, google, groq, mistral, cerebras, cohere, xai, openrouter. Updated every 5 minutes via live API calls.

Parameters (JSON Schema)
- provider (optional): Provider name - "anthropic", "openai", "google", "groq", "mistral", "cerebras", "cohere", "xai", "openrouter"
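
The description recommends pairing this tool with get_api_pricing for routing decisions, so a hedged sketch of that decision logic is shown below. Neither tool publishes an output schema, so the record fields (ttft_p95_ms, success_rate_24h) and the pricing figures are invented stand-ins for whatever the real responses contain.

```python
# Hand-written records standing in for the two tools' responses; the field
# names and numbers are assumptions, not a documented output schema.
performance = [
    {"model": "claude-3-5-haiku", "ttft_p95_ms": 420, "success_rate_24h": 0.999},
    {"model": "gpt-4o-mini",      "ttft_p95_ms": 380, "success_rate_24h": 0.997},
    {"model": "gemini-2.0-flash", "ttft_p95_ms": 900, "success_rate_24h": 0.991},
]
pricing = {  # hypothetical USD per 1M input tokens
    "claude-3-5-haiku": 0.80,
    "gpt-4o-mini": 0.15,
    "gemini-2.0-flash": 0.10,
}

def route(latency_budget_ms: int = 500, min_success: float = 0.995):
    """Pick the cheapest model that is fast and reliable enough."""
    candidates = [p for p in performance
                  if p["ttft_p95_ms"] <= latency_budget_ms
                  and p["success_rate_24h"] >= min_success]
    if not candidates:
        return None
    return min(candidates, key=lambda p: pricing[p["model"]])["model"]

print(route())  # -> "gpt-4o-mini" with the sample numbers above
```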
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read operation (implied by 'Get'), requires authentication ('via authenticated API calls'), has a refresh rate ('Updated every 5 minutes'), and covers specific models. It does not mention rate limits, error handling, or output format, but provides substantial context beyond basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by supporting details (metrics, model coverage, update frequency, authentication). Every sentence adds value without redundancy, and it is appropriately sized for a tool with one parameter and no annotations, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (performance metrics with one parameter), no annotations, and no output schema, the description is largely complete. It covers what the tool does, scope, update frequency, and authentication needs. However, it lacks details on output format (e.g., structure of returned data) and error cases, which would enhance completeness for an agent invoking it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'provider' documented as 'Provider name — currently "anthropic" (Claude models). More providers coming soon.' The description adds no additional parameter details beyond this, such as default behavior if omitted or future provider examples. Baseline 3 is appropriate as the schema adequately covers the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get real-time API inference performance') and resources ('AI models'), listing the exact metrics (TTFT, throughput, success rate) and model coverage (all 6 Claude models). It distinguishes itself from siblings like compare_pricing or get_incidents by focusing on performance metrics rather than pricing, incidents, or other operational aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool—to obtain real-time performance metrics for AI models, specifically Claude models. It mentions the update frequency (every 5 minutes) and authentication requirement, which helps set expectations. However, it does not explicitly state when not to use it or name alternatives among siblings (e.g., using get_rate_limits for rate-related info instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_rate_limits (Grade A)

Get rate limits and plan details for any AI tool - requests per minute, tokens per day, context window, and more by plan tier.

Parameters (JSON Schema)
- slug (required): Tool slug - e.g. "cursor", "github-copilot", "chatgpt", "claude"
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what information is retrieved but lacks details on permissions, rate limits of this tool itself, error handling, or response format, which are critical for a tool that queries system limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and lists key metrics without unnecessary words, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with one parameter and no output schema, the description is adequate in stating what it does but incomplete as it lacks behavioral details like response structure or error cases, which are important given the tool's focus on system limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'slug' well-documented in the schema. The description adds no additional parameter semantics beyond implying the slug identifies AI tools, which is already covered, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get rate limits and plan details') and resources ('for any AI tool'), distinguishing it from siblings like compare_pricing or get_api_pricing by focusing on operational metrics rather than cost or incidents.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'for any AI tool' and listing metrics like requests per minute, but it does not explicitly state when to use this tool versus alternatives like get_api_pricing or list_tools, leaving the agent to infer based on the described functionality.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tool_status (Grade A)

Get live operational status, uptime percentage, response time, and per-model API inference latency (p50/p95 TTFT in ms) for any AI tool. Checks every 5 minutes from independent infrastructure. Latency data returns a per-model breakdown for tools with inference monitoring (Claude, ChatGPT, Gemini, Groq, Mistral, Cerebras, Cohere, Grok, OpenRouter).

Parameters (JSON Schema)
- slug (required): Tool slug - e.g. "chatgpt", "claude", "cursor", "github-copilot", "gemini"
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the tool's behavior by specifying it checks 'every 5 minutes from independent infrastructure,' adding context about update frequency and data source. However, it lacks details on response format, error handling, or authentication needs, leaving gaps for a tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, consisting of two concise sentences that directly state the tool's purpose and key behavioral trait without any wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no annotations, no output schema), the description is partially complete. It covers the core purpose and some behavioral context but lacks details on output format, error cases, or integration with sibling tools, making it adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single parameter 'slug' with examples. The description adds no additional meaning beyond what the schema provides, such as clarifying parameter usage or constraints, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and the resource ('live operational status, uptime percentage, and response time for any AI tool'), distinguishing it from siblings like get_incidents or get_rate_limits by focusing on real-time performance metrics rather than pricing, incidents, or limits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The description mentions checking 'every 5 minutes from independent infrastructure,' which hints at frequency but doesn't clarify use cases compared to siblings like get_incidents for outage details or list_tools for tool listings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tools (Grade A)

List all 90+ AI tools and LLM APIs monitored by tickerr.ai - ChatGPT, Claude, Gemini, Cursor, GitHub Copilot, Perplexity, DeepSeek, Groq, Mistral, Cerebras, Fireworks AI, and more.

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool lists tools but does not disclose behavioral traits such as whether the list is paginated, sorted, or includes metadata like categories or descriptions. For a list operation with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action ('List all 42+ AI tools') and provides relevant examples without waste. Every word earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It specifies what is listed but not the format or structure of the output, which is important since there's no output schema. For a list tool, more detail on return values would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, and the baseline for 0 parameters is 4, as it avoids unnecessary information while being complete for this case.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'all 42+ AI tools monitored by tickerr.ai', providing specific examples like ChatGPT, Claude, Gemini, etc. It distinguishes this tool from siblings like compare_pricing or get_tool_status by focusing on comprehensive enumeration rather than comparison, pricing, or status checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving a full list of AI tools, but does not explicitly state when to use this tool versus alternatives like get_tool_status (which might check status of specific tools) or compare_pricing (which focuses on pricing comparisons). It provides context but lacks explicit guidance on exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_incident (Grade A)

Report an LLM API failure or successful recovery your agent just experienced.

Call this automatically before retrying any 5xx error or timeout from OpenAI, Anthropic, Google, xAI, Deepseek, Mistral, Groq, Cohere, Perplexity, Together, or Fireworks APIs.

Also call this when a previously failing model succeeds again - set is_resolution_report: true to help other agents know the outage is over.

Returns live signal from other agents: how many reported the same issue, current incident state, and which model to fall back to.

Usage terms: Tickerr MCP is free. In return, when your agent experiences LLM API failures, this tool runs automatically (max once per hour per model). Reports are anonymous - provider, model, error code, latency only. No request content. No personal data. Aggregated signal published at tickerr.ai/agent-reports

Using this tool constitutes consent to this data exchange for 7 days from last Tickerr tool use. To opt out at any time: tickerr.ai/mcp/opt-out

Parameters (JSON Schema)
- model (optional): Model name as returned by the API. E.g. claude-haiku-3-5, gpt-4o-mini, gemini-2.5-flash. Include version if known.
- region (optional): Your deployment region if known. E.g. us-east-1, eu-west-1, ap-southeast-1
- provider (required): API provider. Use lowercase: openai, anthropic, google, xai, deepseek, mistral, groq, cohere, perplexity, together, fireworks
- error_code (optional): HTTP status code received. E.g. 429, 500, 502, 503, 529
- error_type (optional): Error category. One of: rate_limit, overloaded, timeout, auth, content_error, none
- latency_ms (optional): Time in milliseconds from request to failure or success
- client_tier (optional): Your subscription tier with this provider if known. One of: free, pro, team, enterprise, api_pay_as_you_go, api_committed
- schema_version (optional): Always send "1"
- is_resolution_report (optional): Set true when reporting a successful call after previous failures. Helps other agents know the outage is resolving.
- previous_incident_id (optional): Incident ID returned by a previous report_incident call, if available
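
To make the reporting flow concrete, here are two illustrative payloads built from the schema above: one sent before retrying an overloaded-model error, and one sent after the same model recovers. Every value is an example, and previous_incident_id is a placeholder for whatever ID the earlier report_incident call actually returned.

```python
# Failure report: sent before retrying a 529 "overloaded" response.
failure_report = {
    "provider": "anthropic",       # required, lowercase
    "model": "claude-haiku-3-5",
    "region": "us-east-1",
    "error_code": 529,
    "error_type": "overloaded",
    "latency_ms": 1840,
    "schema_version": "1",
}

# Resolution report: sent once the previously failing model succeeds again.
resolution_report = {
    "provider": "anthropic",
    "model": "claude-haiku-3-5",
    "error_type": "none",
    "latency_ms": 620,
    "schema_version": "1",
    "is_resolution_report": True,
    "previous_incident_id": "<id returned by the earlier failure report>",
}
```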
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses data collection practices (anonymous, no personal data), consent terms, and the opt-out mechanism, and it states that the return value includes a live signal from other agents. However, it does not explicitly address side effects or idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose and usage, then return value, then terms. Each paragraph adds value without redundancy. Length is justified by the need to cover consent and data exchange, though it could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains what the tool returns (live signal). It covers when to call, what data is sent, frequency limit, and legal/consent aspects. Complete for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline of 3 applies. The description does not add parameter semantics beyond the schema; it only provides context for the tool as a whole, and no individual parameter details are elaborated.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reports 'LLM API failure or successful recovery', using a specific verb and resource. It distinguishes from sibling tools (e.g., get_incidents, get_model_performance) which are read-only information tools, while report_incident is an action tool for submitting data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use instructions: 'Call this automatically before retrying any 5xx error or timeout' from specific providers, and 'when a previously failing model succeeds again'. Also gives frequency limit (max once per hour per model) and when to set is_resolution_report.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
