tickerr
Server Details
Live status, API pricing, and rate limits for ChatGPT, Claude, Gemini, Cursor, and 42+ other AI tools.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- imviky-ctrl/tickerr-mcp
- GitHub Stars
- 0
- Server Listing
- Tickerr — Live AI Tool Status & API Pricing
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.5/5 across all 7 scored tools.
Each tool has a clearly distinct purpose with no overlap: compare_pricing for cost ranking, get_api_pricing for pricing details, get_free_tier for free plans, get_incidents for outages, get_rate_limits for rate limits, get_tool_status for live status, and list_tools for catalog. The descriptions reinforce distinct use cases, eliminating any ambiguity.
All tools follow a consistent verb_noun pattern with 'get_' or 'compare_' prefixes (compare_pricing, get_api_pricing, get_free_tier, get_incidents, get_rate_limits, get_tool_status, list_tools). The naming is uniform and predictable, making it easy to understand each tool's function at a glance.
With 7 tools, the count is well-scoped for the server's purpose of monitoring and comparing AI tools. Each tool serves a specific, non-redundant function in this domain, from pricing and status to incidents and listings, making the set comprehensive yet manageable.
The tool surface is complete for the domain of AI tool monitoring and comparison. It covers all key aspects: pricing (compare and detailed), free tiers, incidents, rate limits, live status, and a full catalog. There are no obvious gaps; agents can perform end-to-end analysis and decision-making without dead ends.
Available Tools
8 tools

compare_pricing (Grade A)
Rank AI models by total cost for a given token workload. Useful for finding the cheapest model for your use case.
| Name | Required | Description | Default |
|---|---|---|---|
| top | No | Show only the N cheapest models | 10 |
| filter | No | Narrow to a provider — e.g. "claude", "gpt", "gemini" | |
| input_tokens | Yes | Number of input tokens per request | |
| output_tokens | No | Number of output tokens per request | 0 |
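The ranking compare_pricing describes can be sketched as a simple per-request cost calculation. This is a hedged illustration of the technique, not the server's actual implementation: the model names and per-1M-token prices below are hypothetical placeholders.

```python
# Hypothetical pricing table, in USD per 1M tokens (placeholder values).
PRICING = {
    "model-a": {"input": 3.00, "output": 15.00},
    "model-b": {"input": 0.25, "output": 1.25},
    "model-c": {"input": 1.10, "output": 4.40},
}

def rank_by_total_cost(input_tokens: int, output_tokens: int = 0, top: int = 10):
    """Rank models by total cost for one request with the given token workload."""
    costs = {
        name: (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
        for name, p in PRICING.items()
    }
    # Cheapest first, truncated to the top N.
    return sorted(costs.items(), key=lambda kv: kv[1])[:top]

print(rank_by_total_cost(input_tokens=10_000, output_tokens=2_000, top=3))
```

For a 10k-input / 2k-output workload, model-b ranks cheapest here because its low output price dominates the total.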
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions ranking by total cost and usefulness for cost comparison, but lacks details on how the ranking is performed (e.g., whether it includes hidden fees), what data sources are used, whether pricing is real-time or cached, and any rate limits or authentication needs. This leaves significant behavioral aspects unclear for a tool that could involve complex calculations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, consisting of two sentences that directly state the tool's purpose and utility. Every sentence earns its place by clearly communicating core functionality without redundancy or unnecessary details, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of cost comparison (involving calculations, data sources, and potential variability) and the lack of annotations and output schema, the description is incomplete. It does not explain what the output looks like (e.g., ranked list format, cost units), how results are derived, or any limitations (e.g., regional pricing differences). For a tool with no structured output and behavioral gaps, this falls short of providing sufficient context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds marginal value by implying that 'input_tokens' and 'output_tokens' are used to calculate total cost, but it does not provide additional semantics beyond what the schema descriptions state (e.g., how tokens relate to cost, default behaviors beyond schema defaults). Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Rank AI models by total cost') and resources ('AI models'), and distinguishes it from siblings by focusing on comparative cost analysis rather than retrieving raw pricing data or other operational information. It explicitly mentions the use case ('finding the cheapest model'), making the purpose distinct and actionable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Useful for finding the cheapest model for your use case'), which implies it's for cost optimization scenarios. However, it does not explicitly state when not to use it or name alternatives (e.g., 'get_api_pricing' for raw pricing data), leaving some guidance gaps compared to explicit sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_api_pricing (Grade A)
Get current API pricing (input/output cost per 1M tokens) for AI models tracked by tickerr.ai. Filter by model or provider name.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max models to return | 50 |
| filter | No | Filter by model or tool name — e.g. "claude", "gpt-4o", "gemini" | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's function and filtering capability, but does not disclose behavioral traits like rate limits, authentication needs, error handling, or pagination behavior. The description is accurate but lacks operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, consisting of two concise sentences that directly state the tool's purpose and filtering capability. Every sentence earns its place with no wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, no output schema), the description is complete enough for basic understanding but lacks details on return values, error conditions, or operational constraints. It covers the core functionality but does not fully compensate for the absence of annotations and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('limit' and 'filter') with descriptions. The description adds minimal value beyond the schema by mentioning filtering by 'model or provider name', but does not provide additional syntax, format details, or usage examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get current API pricing') and resources ('AI models tracked by tickerr.ai'), including what pricing information is provided ('input/output cost per 1M tokens'). It distinguishes from siblings like 'compare_pricing' by focusing on retrieval rather than comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Filter by model or provider name'), but does not explicitly state when not to use it or name alternatives. It implies usage for retrieving pricing data rather than comparing it, but lacks explicit exclusions or sibling tool comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_free_tier (Grade B)
Find the best free plans across AI tools, grouped by category (LLM APIs, coding assistants, image generation, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by category slug — e.g. "llm", "coding", "image", "video" | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but lacks details on behavioral traits such as whether it's read-only, requires authentication, has rate limits, or what the output format might be. For a tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes essential details without any wasted words. It's appropriately sized for the tool's complexity and structured to convey key information concisely.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is minimally adequate. It covers the purpose and parameter context but lacks completeness in behavioral transparency and usage guidelines, which are important for effective tool invocation by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'category' parameter well-documented. The description adds value by listing example categories ('LLM APIs, coding assistants, image generation, etc.') beyond the schema's generic examples, but it doesn't provide additional semantic context like default behavior or usage constraints. Baseline 3 is appropriate as the schema does most of the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Find') and resource ('best free plans across AI tools'), and it specifies the grouping by categories. However, it doesn't explicitly distinguish this tool from sibling tools like 'compare_pricing' or 'get_api_pricing', which might also involve pricing or plan information, leaving some ambiguity about its unique role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer this over sibling tools such as 'compare_pricing' or 'get_api_pricing', nor does it specify any prerequisites or exclusions for usage, leaving the agent to infer context from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_incidents (Grade A)
Get historical incidents (outages, degradations) for any AI tool from the last 90 days. Sourced from 26 official provider status pages.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Tool slug — e.g. "chatgpt", "claude", "gemini" | |
| limit | No | Number of incidents (max 50) | 10 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It describes the data source and temporal scope, but lacks critical behavioral details such as authentication requirements, rate limits, pagination behavior, error handling, or what the return format looks like (especially important without an output schema). The description provides basic context but misses key operational traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates purpose, scope, and source without unnecessary words. It is front-loaded with the core action and resource, and every element (temporal range, data source) adds value. No wasted verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, no output schema), the description is partially complete. It covers what the tool does and its scope well, but lacks details on behavioral aspects (e.g., response format, error cases) and doesn't compensate for the absence of annotations or output schema. Adequate for basic understanding but with clear gaps for reliable agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (slug with examples, limit with default/max). The description adds no additional parameter semantics beyond what's in the schema—it doesn't explain slug format further, clarify limit behavior, or provide usage examples. Baseline 3 is appropriate when schema does all the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get historical incidents') and resource ('incidents (outages, degradations) for any AI tool'), with precise temporal scope ('from the last 90 days') and data source ('Sourced from 26 official provider status pages'). It distinguishes from sibling tools like get_tool_status (likely current status) by focusing on historical data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through temporal and source constraints (last 90 days, official status pages), but does not explicitly state when to use this tool versus alternatives like get_tool_status or other siblings. No exclusions or prerequisites are mentioned, leaving some ambiguity about appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_model_performance (Grade A)
Get real-time API inference performance for AI models — TTFT (time-to-first-token), throughput (tokens/sec), and 24-hour success rate. Currently covers all 6 Claude models (Haiku, Sonnet, Opus). Updated every 5 minutes via authenticated API calls.
| Name | Required | Description | Default |
|---|---|---|---|
| provider | No | Provider name — currently "anthropic" (Claude models). More providers coming soon. | |
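TTFT and throughput combine naturally into a rough end-to-end latency estimate. The sketch below shows how an agent might use these metrics; it is an assumption for illustration, not part of the tickerr API, and the numbers are made up.

```python
def estimated_latency_seconds(ttft_s: float, throughput_tps: float, output_tokens: int) -> float:
    """Rough end-to-end latency: time to first token plus generation time
    for the remaining output at the reported tokens/sec throughput."""
    return ttft_s + output_tokens / throughput_tps

# 0.5 s TTFT, 100 tokens/sec, 500 output tokens -> 0.5 + 5.0 = 5.5 s
print(estimated_latency_seconds(0.5, 100.0, 500))
```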
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read operation (implied by 'Get'), requires authentication ('via authenticated API calls'), has a refresh rate ('Updated every 5 minutes'), and covers specific models. It does not mention rate limits, error handling, or output format, but provides substantial context beyond basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by supporting details (metrics, model coverage, update frequency, authentication). Every sentence adds value without redundancy, and it is appropriately sized for a tool with one parameter and no annotations, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (performance metrics with one parameter), no annotations, and no output schema, the description is largely complete. It covers what the tool does, scope, update frequency, and authentication needs. However, it lacks details on output format (e.g., structure of returned data) and error cases, which would enhance completeness for an agent invoking it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'provider' documented as 'Provider name — currently "anthropic" (Claude models). More providers coming soon.' The description adds no additional parameter details beyond this, such as default behavior if omitted or future provider examples. Baseline 3 is appropriate as the schema adequately covers the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get real-time API inference performance') and resources ('AI models'), listing the exact metrics (TTFT, throughput, success rate) and model coverage (all 6 Claude models). It distinguishes itself from siblings like compare_pricing or get_incidents by focusing on performance metrics rather than pricing, incidents, or other operational aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool—to obtain real-time performance metrics for AI models, specifically Claude models. It mentions the update frequency (every 5 minutes) and authentication requirement, which helps set expectations. However, it does not explicitly state when not to use it or name alternatives among siblings (e.g., using get_rate_limits for rate-related info instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_rate_limits (Grade A)
Get rate limits and plan details for any AI tool — requests per minute, tokens per day, context window, and more by plan tier.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Tool slug — e.g. "cursor", "github-copilot", "chatgpt", "claude" | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what information is retrieved but lacks details on permissions, rate limits of this tool itself, error handling, or response format, which are critical for a tool that queries system limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and lists key metrics without unnecessary words, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one parameter and no output schema, the description is adequate in stating what it does but incomplete as it lacks behavioral details like response structure or error cases, which are important given the tool's focus on system limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'slug' well-documented in the schema. The description adds no additional parameter semantics beyond implying the slug identifies AI tools, which is already covered, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get rate limits and plan details') and resources ('for any AI tool'), distinguishing it from siblings like compare_pricing or get_api_pricing by focusing on operational metrics rather than cost or incidents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for any AI tool' and listing metrics like requests per minute, but it does not explicitly state when to use this tool versus alternatives like get_api_pricing or list_tools, leaving the agent to infer based on the described functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tool_status (Grade A)
Get live operational status, uptime percentage, and response time for any AI tool. Checks every 5 minutes from independent infrastructure.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Tool slug — e.g. "chatgpt", "claude", "cursor", "github-copilot", "gemini" | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the tool's behavior by specifying it checks 'every 5 minutes from independent infrastructure,' adding context about update frequency and data source. However, it lacks details on response format, error handling, or authentication needs, leaving gaps for a tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, consisting of two concise sentences that directly state the tool's purpose and key behavioral trait without any wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no annotations, no output schema), the description is partially complete. It covers the core purpose and some behavioral context but lacks details on output format, error cases, or integration with sibling tools, making it adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter 'slug' with examples. The description adds no additional meaning beyond what the schema provides, such as clarifying parameter usage or constraints, resulting in a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and the resource ('live operational status, uptime percentage, and response time for any AI tool'), distinguishing it from siblings like get_incidents or get_rate_limits by focusing on real-time performance metrics rather than pricing, incidents, or limits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions checking 'every 5 minutes from independent infrastructure,' which hints at frequency but doesn't clarify use cases compared to siblings like get_incidents for outage details or list_tools for tool listings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tools (Grade A)
List all 42+ AI tools monitored by tickerr.ai — ChatGPT, Claude, Gemini, Cursor, GitHub Copilot, Perplexity, DeepSeek, Groq, Fireworks AI, and more.
This tool takes no parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool lists tools but does not disclose behavioral traits such as whether the list is paginated, sorted, or includes metadata like categories or descriptions. For a list operation with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key action ('List all 42+ AI tools') and provides relevant examples without waste. Every word earns its place, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It specifies what is listed but not the format or structure of the output, which is important since there's no output schema. For a list tool, more detail on return values would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, and the baseline for 0 parameters is 4, as it avoids unnecessary information while being complete for this case.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'all 42+ AI tools monitored by tickerr.ai', providing specific examples like ChatGPT, Claude, Gemini, etc. It distinguishes this tool from siblings like compare_pricing or get_tool_status by focusing on comprehensive enumeration rather than comparison, pricing, or status checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a full list of AI tools, but does not explicitly state when to use this tool versus alternatives like get_tool_status (which might check status of specific tools) or compare_pricing (which focuses on pricing comparisons). It provides context but lacks explicit guidance on exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
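Before publishing, it may help to sanity-check the file locally. A minimal sketch, assuming only the structure shown above; the validate_glama_manifest helper is hypothetical, not a Glama-provided tool, and does not replace Glama's own verification.

```python
import json

def validate_glama_manifest(raw: str) -> bool:
    """Check that a glama.json payload has the expected shape:
    a non-empty maintainers list whose entries each carry an email string."""
    doc = json.loads(raw)
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        return False
    return all(
        isinstance(m, dict) and isinstance(m.get("email"), str) and "@" in m["email"]
        for m in maintainers
    )

manifest = '{"$schema": "https://glama.ai/mcp/schemas/connector.json", "maintainers": [{"email": "you@example.com"}]}'
print(validate_glama_manifest(manifest))  # True
```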
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!
Your Connectors
Sign in to create a connector for this server.