newsoracle
Server Details
NewsOracle News and Trends Intelligence MCP
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: ToolOracle/newsoracle
- GitHub Stars: 0
- Server Listing: NewsOracle
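Since the transport is Streamable HTTP, a client speaks MCP JSON-RPC over HTTP POST to a single endpoint. A minimal sketch of the request envelope, assuming the standard MCP Streamable HTTP conventions (the listing above does not publish the server URL, so the endpoint below is a placeholder):

```python
# Sketch of a Streamable HTTP MCP request envelope.
# ENDPOINT is a placeholder; the listing does not publish the real URL.
ENDPOINT = "https://example.com/mcp"

headers = {
    "Content-Type": "application/json",
    # Streamable HTTP clients advertise that they accept both plain JSON
    # responses and server-sent event streams.
    "Accept": "application/json, text/event-stream",
}

# First call most clients make: enumerate the server's tools.
body = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```

A real client would POST `body` with `headers` to the server's MCP endpoint and parse either a JSON body or an SSE stream, depending on the response `Content-Type`.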
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3/5 across all 9 tools.
Most tools have distinct purposes, but there is some overlap between 'breaking_now' (latest breaking stories) and 'top_news' (top headlines by country/topic), which could cause confusion. Other tools like 'compare_coverage', 'related_queries', and 'trend_over_time' are clearly differentiated and serve unique functions.
Naming is inconsistent: some tools use snake_case (e.g., 'search_news', 'trend_over_time') while others use camelCase (e.g., 'healthCheck', 'relatedQueries'). The names are nonetheless generally readable and descriptive, with a mix of verb_noun and noun_phrase patterns.
With 9 tools, the count is well-scoped for a news analysis server, covering a range of functions from real-time updates to historical trends. Each tool appears to serve a specific purpose without feeling excessive or insufficient for the domain.
The toolset provides comprehensive coverage for news analysis, including real-time updates, search, trends, and comparisons. A minor gap is the lack of tools for user-specific features like saved searches or personalized feeds, but core workflows are well-covered without dead ends.
Available Tools
9 tools

breaking_now (Grade: C)
Latest breaking/developing stories — combines trending searches + top headlines.
| Name | Required | Description | Default |
|---|---|---|---|
| country | No | Country code | us |
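As a sketch of how an agent might invoke this tool, assuming the standard MCP `tools/call` request shape (the argument value is illustrative):

```python
import json

# Hypothetical MCP tools/call request for breaking_now.
# "country" is optional and defaults to "us" per the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "breaking_now",
        "arguments": {"country": "us"},
    },
}
print(json.dumps(request, indent=2))
```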
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions combining trending searches and top headlines, but fails to detail operational traits like rate limits, authentication needs, data freshness, or potential side effects. This leaves significant gaps for an AI agent to understand how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It directly communicates what the tool does in a clear and structured manner, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete for a tool that likely returns news data. It doesn't explain the return format, data structure, or any limitations (e.g., number of stories, time ranges). For a news retrieval tool with no structured output information, more context is needed to guide effective usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'country' parameter documented as 'Country code (default: us).' The description does not add any meaning beyond this, such as explaining how country affects results or listing supported codes. Given the high schema coverage, a baseline score of 3 is appropriate as the schema handles the parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to retrieve 'latest breaking/developing stories' by combining 'trending searches + top headlines.' This specifies the verb (retrieve/combine) and resource (stories), though it doesn't explicitly differentiate from siblings like 'top_news' or 'trending_topics,' which may have overlapping functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'top_news,' 'trending_topics,' or 'search_news.' The description implies a focus on breaking news, but it lacks explicit instructions on context, exclusions, or comparisons to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_coverage (Grade: C)
How different news sources cover the same story.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | News topic or event | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation but doesn't specify details like rate limits, authentication needs, output format, or whether it returns real-time or historical data. This is inadequate for a tool that likely involves complex data processing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of comparing news coverage and the lack of annotations and output schema, the description is insufficient. It doesn't explain what the tool returns, how comparisons are made, or any behavioral traits, leaving significant gaps for an agent to understand its full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'query' documented as 'News topic or event.' The description adds no additional meaning beyond this, such as examples or constraints, so it meets the baseline score of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: analyzing how different news sources cover the same story. It specifies the verb 'cover' and resource 'story' with the qualifier 'different news sources,' making it distinct from general news search tools. However, it doesn't explicitly differentiate from siblings like 'topic_deep_dive' or 'trend_over_time,' which might also involve comparative analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or compare it to sibling tools such as 'search_news' or 'topic_deep_dive,' leaving the agent to infer usage based on the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (Grade: B)
Server status, API connectivity.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. 'Server status, API connectivity' implies a read-only diagnostic operation, but doesn't specify what exactly gets checked, response format, timeout behavior, authentication requirements, or whether it performs active probing versus passive status reporting. For a diagnostic tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at just 4 words ('Server status, API connectivity'), with zero wasted language. It's front-loaded with the essential purpose and uses efficient parallel structure. Every word earns its place in communicating the diagnostic nature of the tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a diagnostic tool with no annotations and no output schema, the description is incomplete. It doesn't explain what constitutes 'status' or 'connectivity', what metrics or indicators are returned, whether this checks internal systems versus external APIs, or what the response format looks like. The agent lacks sufficient context to understand what the tool actually returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description appropriately doesn't discuss parameters since none exist. The baseline for 0 parameters is 4, as the description correctly focuses on purpose rather than input semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Server status, API connectivity' clearly states the tool's purpose - checking server and API operational status. It uses specific terms like 'status' and 'connectivity' that indicate a diagnostic function. However, it doesn't explicitly differentiate from sibling tools like 'breaking_now' or 'trending_topics' which might also involve system status checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the 8 sibling tools. There's no mention of alternatives, prerequisites, or specific contexts where this health check is appropriate versus other diagnostic or news-related tools. The agent must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_news (Grade: C)
Search news articles by keyword with time filter.
| Name | Required | Description | Default |
|---|---|---|---|
| when | No | Time range: 1h, 1d, 7d, 1y | 7d |
| query | No | Search query | |
| country | No | Country | us |
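Combining the three optional parameters, a hypothetical `tools/call` request could look like the following (the query string is purely illustrative):

```python
import json

# Hypothetical tools/call for search_news; all three arguments are optional.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_news",
        "arguments": {
            "query": "semiconductor exports",  # illustrative keyword
            "when": "1d",                      # one of: 1h, 1d, 7d, 1y
            "country": "us",                   # schema default is also "us"
        },
    },
}
print(json.dumps(request, indent=2))
```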
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions time filtering but fails to describe critical behaviors such as pagination, rate limits, authentication needs, result format, or error handling. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool operates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It front-loads the core functionality ('search news articles') and specifies key filters ('by keyword with time filter'), making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with multiple parameters), lack of annotations, and absence of an output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., article list format), error conditions, or behavioral constraints like rate limits, leaving the agent with insufficient context for reliable use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters. The description adds minimal value beyond the schema by implying keyword-based search and time filtering, but doesn't provide additional syntax, format details, or usage examples. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('search') and resource ('news articles'), and specifies filtering criteria ('by keyword with time filter'). It distinguishes itself from siblings like 'top_news' or 'trending_topics' by emphasizing search functionality, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus sibling tools like 'top_news' (for current headlines) or 'trending_topics' (for popular subjects). It lacks explicit when/when-not instructions or named alternatives, offering only basic functional context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
topic_deep_dive (Grade: C)
Deep analysis: articles, source diversity, interest trend over time.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Topic to analyze | |
| country | No | Country | us |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'deep analysis' but doesn't disclose behavioral traits such as whether this is a read-only operation, potential rate limits, data sources, or output format. This leaves significant gaps in understanding how the tool behaves beyond its basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded in a single sentence, listing key aspects (articles, source diversity, interest trend) without waste. However, it could be slightly more structured by clarifying the relationship between these elements or adding a brief introductory phrase.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (deep analysis with multiple facets), lack of annotations, and no output schema, the description is incomplete. It states what the tool does but omits critical details like output format, data sources, or limitations. This is minimally adequate but leaves clear gaps for an agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with clear descriptions for both parameters ('query' and 'country'). The description adds no additional meaning beyond the schema, such as explaining what constitutes a valid 'query' or how 'country' affects the analysis. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'deep analysis' on a topic, specifying it covers articles, source diversity, and interest trends over time. This provides a specific verb ('analysis') and resources (articles, sources, trends), though it doesn't explicitly differentiate from siblings like 'search_news' or 'trend_over_time' which might overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives. With siblings like 'search_news', 'trend_over_time', and 'compare_coverage', there's no indication of when this deep analysis is preferred over simpler searches or comparisons, leaving the agent without usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
top_news (Grade: C)
Top headlines by country and topic. Topics: business, technology, sports, health, science, entertainment.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | Topic: business, technology, sports, health, science, entertainment | |
| country | No | Country code | us |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions what the tool does but lacks critical behavioral details: whether it returns a fixed number of headlines, uses pagination, requires authentication, has rate limits, or provides timestamps. For a news retrieval tool with zero annotation coverage, this leaves significant gaps in understanding its operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: a single sentence stating the core functionality followed by a clear list of topic options. Every word earns its place with zero redundancy or unnecessary elaboration. The structure efficiently communicates the essential information without wasting tokens.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (filtered news retrieval), absence of annotations, and lack of output schema, the description is insufficiently complete. It doesn't address what the output looks like (headline list format, metadata included), how results are ordered, or any limitations (e.g., date range, source restrictions). For a tool with no structured behavioral hints, the description should provide more operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value beyond the schema: it repeats the topic options and implies country filtering but doesn't provide additional context about parameter interactions, default behaviors beyond the schema's 'default: us', or semantic meaning of the filters. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving top headlines filtered by country and topic. It specifies the resource (headlines/news) and the filtering dimensions (country, topic). However, it doesn't explicitly differentiate from siblings like 'breaking_now' or 'search_news', which likely serve different news retrieval purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'breaking_now' (likely real-time alerts), 'search_news' (likely keyword-based search), and 'trending_topics' (likely popularity-based), there's no indication of when this filtered headline approach is preferred. The description merely lists topics without contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trending_topics (Grade: B)
What is trending right now on Google in a country.
| Name | Required | Description | Default |
|---|---|---|---|
| country | No | Country code, e.g. US, DE, GB, JP | US |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions retrieving trending topics but doesn't specify details like data freshness (e.g., real-time vs. daily updates), rate limits, authentication requirements, or potential data sources. This leaves significant gaps in understanding how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that efficiently conveys the tool's purpose without unnecessary words. It is front-loaded with the core function, making it easy to parse quickly. There is no wasted information, and it earns its place by succinctly stating what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter) and lack of annotations or output schema, the description is adequate but incomplete. It covers the basic purpose and parameter context but misses behavioral details like response format or usage constraints. For a simple query tool, this is minimally viable but could benefit from more context on outputs or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'country' parameter clearly documented as a country code with a default of 'US'. The description adds minimal value beyond this, only reiterating the country focus without providing additional context like supported country codes or how trends might vary by region. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to retrieve trending topics on Google for a specific country. It specifies the verb ('what is trending') and resource ('on Google in a country'), making it easy to understand. However, it doesn't explicitly distinguish this tool from siblings like 'trend_over_time' or 'top_news', which might also involve trending content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance on when to use this tool, only implying it's for current trends in a country. It doesn't mention when not to use it or suggest alternatives among the sibling tools, such as using 'trend_over_time' for historical data or 'top_news' for news-specific trends. This lack of explicit comparison leaves usage context vague.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trend_over_time (Grade: C)
Google Trends interest over time for 1-5 keywords. Compare search interest.
| Name | Required | Description | Default |
|---|---|---|---|
| country | No | Country | US |
| keywords | No | List of 1-5 keywords to compare | |
| timeframe | No | Timeframe: 'today 3-m', 'today 12-m', 'today 5-y' | today 3-m |
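Because `keywords` takes a list with a stated 1-5 limit, a hypothetical call might validate that constraint client-side before sending (keyword values are illustrative):

```python
import json

# Hypothetical tools/call for trend_over_time.
keywords = ["electric cars", "hybrid cars"]  # illustrative; must be 1-5 entries
assert 1 <= len(keywords) <= 5, "schema allows 1-5 keywords"

request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "trend_over_time",
        "arguments": {
            "keywords": keywords,
            "timeframe": "today 12-m",  # 'today 3-m', 'today 12-m', or 'today 5-y'
            "country": "US",
        },
    },
}
print(json.dumps(request, indent=2))
```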
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves 'interest over time' data but lacks details on rate limits, authentication needs, data freshness, or error handling. For a tool interacting with an external API like Google Trends, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short sentences that directly state the tool's function and scope. It is front-loaded with the core purpose and avoids any unnecessary details, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (interacting with Google Trends API), lack of annotations, and no output schema, the description is incomplete. It does not explain the return format (e.g., time series data, normalization), potential limitations, or how results are structured, which is crucial for an AI agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (country, keywords, timeframe) with descriptions and defaults. The description adds minimal value by implying the keywords parameter ('1-5 keywords') but does not provide additional semantics beyond what the schema offers, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Google Trends interest over time for 1-5 keywords. Compare search interest.' It specifies the verb ('interest over time'), resource ('Google Trends'), and scope ('1-5 keywords'), but does not explicitly differentiate it from sibling tools like 'trending_topics' or 'topic_deep_dive', which might also involve trends analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Compare search interest' but does not specify contexts, exclusions, or refer to sibling tools like 'compare_coverage' or 'trending_topics' that might be relevant for similar tasks, leaving the agent to infer usage based on the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
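Before publishing, one could sanity-check the manifest locally; this sketch only verifies that the document is valid JSON with the two fields described above (the email is the same placeholder as in the snippet):

```python
import json

# The manifest from above, with the placeholder email retained.
manifest = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

# Round-trip through JSON and check the required structure:
# a "$schema" string and at least one maintainer carrying an "email".
doc = json.loads(json.dumps(manifest))
assert isinstance(doc["$schema"], str)
assert doc["maintainers"] and "email" in doc["maintainers"][0]
print("manifest looks well-formed")
```

Serving the file at `/.well-known/glama.json` on the server's domain is deployment-specific and outside this check.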
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.