
Server Details

NewsOracle News and Trends Intelligence MCP

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/newsoracle
GitHub Stars: 0
Server Listing: NewsOracle

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3/5 across 9 of 9 tools scored.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between 'breaking_now' (latest breaking stories) and 'top_news' (top headlines by country/topic), which could cause confusion. Other tools like 'compare_coverage', 'related_queries', and 'trend_over_time' are clearly differentiated and serve unique functions.

Naming Consistency: 3/5

The naming is mixed with some tools using snake_case (e.g., 'search_news', 'trend_over_time') and others using camelCase (e.g., 'healthCheck', 'relatedQueries'), which breaks consistency. However, the names are generally readable and descriptive, with a mix of verb_noun and noun_phrase patterns.
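The mixed naming the review flags can be fixed mechanically. A minimal sketch (assuming plain camelCase names with no acronyms) that normalizes tool names to snake_case:

```python
import re

def to_snake_case(name: str) -> str:
    """Convert a camelCase tool name to snake_case."""
    # Insert an underscore before each interior uppercase letter, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

# The camelCase names called out in the review normalize cleanly:
print(to_snake_case("healthCheck"))     # health_check
print(to_snake_case("relatedQueries"))  # related_queries
# Names already in snake_case pass through unchanged:
print(to_snake_case("search_news"))     # search_news
```

Applying one convention across all nine tools would lift this score without changing any behavior.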

Tool Count: 5/5

With 9 tools, the count is well-scoped for a news analysis server, covering a range of functions from real-time updates to historical trends. Each tool appears to serve a specific purpose without feeling excessive or insufficient for the domain.

Completeness: 4/5

The toolset provides comprehensive coverage for news analysis, including real-time updates, search, trends, and comparisons. A minor gap is the lack of tools for user-specific features like saved searches or personalized feeds, but core workflows are well-covered without dead ends.

Available Tools

9 tools
breaking_now: C

Latest breaking/developing stories — combines trending searches + top headlines.

Parameters (JSON Schema):
country (optional): Country code (default: us)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions combining trending searches and top headlines, but fails to detail operational traits like rate limits, authentication needs, data freshness, or potential side effects. This leaves significant gaps for an AI agent to understand how the tool behaves beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It directly communicates what the tool does in a clear and structured manner, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for a tool that likely returns news data. It doesn't explain the return format, data structure, or any limitations (e.g., number of stories, time ranges). For a news retrieval tool with no structured output information, more context is needed to guide effective usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'country' parameter documented as 'Country code (default: us).' The description does not add any meaning beyond this, such as explaining how country affects results or listing supported codes. Given the high schema coverage, a baseline score of 3 is appropriate as the schema handles the parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to retrieve 'latest breaking/developing stories' by combining 'trending searches + top headlines.' This specifies the verb (retrieve/combine) and resource (stories), though it doesn't explicitly differentiate from siblings like 'top_news' or 'trending_topics,' which may have overlapping functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'top_news,' 'trending_topics,' or 'search_news.' The description implies a focus on breaking news, but it lacks explicit instructions on context, exclusions, or comparisons to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
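To make the Behavior and Usage Guidelines gaps concrete, here is a hypothetical, enriched definition for breaking_now. The annotation hints (readOnlyHint, idempotentHint, openWorldHint) come from the MCP tool-annotation vocabulary; the expanded description text and the response fields it promises are illustrative assumptions, not the server's actual copy:

```python
# A sketch of an enriched breaking_now definition. Everything beyond the
# original one-line description is an illustrative assumption.
breaking_now_tool = {
    "name": "breaking_now",
    "description": (
        "Latest breaking/developing stories, combining trending searches "
        "with top headlines. Read-only; no side effects. Returns a JSON "
        "list of stories (title, source, url, published_at). Use top_news "
        "for stable per-topic headlines and search_news for keyword "
        "lookups; use this tool only for what is happening right now."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "country": {
                "type": "string",
                "description": "ISO 3166-1 alpha-2 country code",
                "default": "us",
            }
        },
    },
    "annotations": {
        "readOnlyHint": True,    # fetches data, never mutates state
        "idempotentHint": False, # breaking news changes between calls
        "openWorldHint": True,   # talks to external news/trends APIs
    },
}
```

A definition along these lines would address the reviewers' Behavior, Completeness, and Usage Guidelines criticisms in one place.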

compare_coverage: C

How different news sources cover the same story.

Parameters (JSON Schema):
query (optional): News topic or event

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation but doesn't specify details like rate limits, authentication needs, output format, or whether it returns real-time or historical data. This is inadequate for a tool that likely involves complex data processing.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of comparing news coverage and the lack of annotations and output schema, the description is insufficient. It doesn't explain what the tool returns, how comparisons are made, or any behavioral traits, leaving significant gaps for an agent to understand its full context.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'query' documented as 'News topic or event.' The description adds no additional meaning beyond this, such as examples or constraints, so it meets the baseline score of 3 where the schema does the heavy lifting.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: analyzing how different news sources cover the same story. It specifies the verb 'cover' and resource 'story' with the qualifier 'different news sources,' making it distinct from general news search tools. However, it doesn't explicitly differentiate from siblings like 'topic_deep_dive' or 'trend_over_time,' which might also involve comparative analysis.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or compare it to sibling tools such as 'search_news' or 'topic_deep_dive,' leaving the agent to infer usage based on the name alone.
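Since the tool's output format is undocumented, the following sketch only illustrates the kind of per-source grouping a coverage comparison implies; the article records and their fields (source, title) are assumptions:

```python
from collections import defaultdict

# Hypothetical article records; the real tool's output format is undocumented.
articles = [
    {"source": "Reuters", "title": "Storm makes landfall"},
    {"source": "BBC", "title": "Hurricane hits coast"},
    {"source": "Reuters", "title": "Evacuations ordered"},
]

def group_by_source(items):
    """Bucket article titles by outlet so per-source framing can be compared."""
    buckets = defaultdict(list)
    for item in items:
        buckets[item["source"]].append(item["title"])
    return dict(buckets)

coverage = group_by_source(articles)
print(coverage["Reuters"])  # ['Storm makes landfall', 'Evacuations ordered']
```

Documenting whether compare_coverage returns something like this grouping, or summarized framing per outlet, would resolve the Completeness criticism above.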

health_check: B

Server status, API connectivity.

Parameters (JSON Schema):
No parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. 'Server status, API connectivity' implies a read-only diagnostic operation, but doesn't specify what exactly gets checked, response format, timeout behavior, authentication requirements, or whether it performs active probing versus passive status reporting. For a diagnostic tool with zero annotation coverage, this is insufficient.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just 4 words ('Server status, API connectivity'), with zero wasted language. It's front-loaded with the essential purpose and uses efficient parallel structure. Every word earns its place in communicating the diagnostic nature of the tool.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a diagnostic tool with no annotations and no output schema, the description is incomplete. It doesn't explain what constitutes 'status' or 'connectivity', what metrics or indicators are returned, whether this checks internal systems versus external APIs, or what the response format looks like. The agent lacks sufficient context to understand what the tool actually returns.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description appropriately doesn't discuss parameters since none exist. The baseline for 0 parameters is 4, as the description correctly focuses on purpose rather than input semantics.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Server status, API connectivity' clearly states the tool's purpose: checking server and API operational status. It uses specific terms like 'status' and 'connectivity' that indicate a diagnostic function. However, it doesn't explicitly differentiate from sibling tools like 'breaking_now' or 'trending_topics,' which might also involve system status checks.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the 8 sibling tools. There's no mention of alternatives, prerequisites, or specific contexts where this health check is appropriate versus other diagnostic or news-related tools. The agent must infer usage from the purpose alone.
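The reviewers ask what 'status' and 'connectivity' actually return. A minimal sketch of the kind of structured payload that would close that gap; the check names and fields here are hypothetical, not the server's real response:

```python
import time

def health_check() -> dict:
    """Sketch of a structured diagnostic payload: per-dependency
    checks plus an overall status, as the review requests."""
    checks = {
        "news_api": True,    # placeholder: would actively probe the news API
        "trends_api": True,  # placeholder: would actively probe Google Trends
    }
    return {
        "status": "ok" if all(checks.values()) else "degraded",
        "checks": checks,
        "checked_at": time.time(),
    }

print(health_check()["status"])  # ok
```

Even just naming these fields in the description would tell an agent whether the tool probes external APIs actively or reports cached state.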

search_news: C

Search news articles by keyword with time filter.

Parameters (JSON Schema):
when (optional): Time range: 1h, 1d, 7d, 1y (default: 7d)
query (optional): Search query
country (optional): Country (default: us)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions time filtering but fails to describe critical behaviors such as pagination, rate limits, authentication needs, result format, or error handling. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool operates.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It front-loads the core functionality ('search news articles') and specifies key filters ('by keyword with time filter'), making it easy to parse quickly.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with multiple parameters), lack of annotations, and absence of an output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., article list format), error conditions, or behavioral constraints like rate limits, leaving the agent with insufficient context for reliable use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters. The description adds minimal value beyond the schema by implying keyword-based search and time filtering, but doesn't provide additional syntax, format details, or usage examples. This meets the baseline for high schema coverage.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('search') and resource ('news articles'), and specifies filtering criteria ('by keyword with time filter'). It distinguishes itself from siblings like 'top_news' or 'trending_topics' by emphasizing search functionality, though it doesn't explicitly name alternatives.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus sibling tools like 'top_news' (for current headlines) or 'trending_topics' (for popular subjects). It lacks explicit when/when-not instructions or named alternatives, offering only basic functional context.
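The `when` shorthands documented in the schema map naturally onto concrete lookback windows. A client-side sketch (treating 1y as 365 days, which is an assumption; the helper name is hypothetical):

```python
from datetime import timedelta

# The schema's documented shorthands for search_news's `when` parameter.
WHEN_RANGES = {
    "1h": timedelta(hours=1),
    "1d": timedelta(days=1),
    "7d": timedelta(days=7),
    "1y": timedelta(days=365),  # assumption: a year means 365 days here
}

def resolve_when(when: str = "7d") -> timedelta:
    """Map the `when` shorthand to a concrete lookback window."""
    if when not in WHEN_RANGES:
        raise ValueError(f"when must be one of {sorted(WHEN_RANGES)}")
    return WHEN_RANGES[when]

print(resolve_when())      # 7 days, 0:00:00 (the documented default)
print(resolve_when("1h"))  # 1:00:00
```

Rejecting unknown shorthands client-side avoids a round trip to a server whose error behavior is undocumented.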

topic_deep_dive: C

Deep analysis: articles, source diversity, interest trend over time.

Parameters (JSON Schema):
query (optional): Topic to analyze
country (optional): Country (default: us)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'deep analysis' but doesn't disclose behavioral traits such as whether this is a read-only operation, potential rate limits, data sources, or output format. This leaves significant gaps in understanding how the tool behaves beyond its basic purpose.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded in a single sentence, listing key aspects (articles, source diversity, interest trend) without waste. However, it could be slightly more structured by clarifying the relationship between these elements or adding a brief introductory phrase.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (deep analysis with multiple facets), lack of annotations, and no output schema, the description is incomplete. It states what the tool does but omits critical details like output format, data sources, or limitations. This is minimally adequate but leaves clear gaps for an agent to use it effectively.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with clear descriptions for both parameters ('query' and 'country'). The description adds no additional meaning beyond the schema, such as explaining what constitutes a valid 'query' or how 'country' affects the analysis. Baseline 3 is appropriate since the schema does the heavy lifting.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'deep analysis' on a topic, specifying it covers articles, source diversity, and interest trends over time. This provides a specific verb ('analysis') and resources (articles, sources, trends), though it doesn't explicitly differentiate from siblings like 'search_news' or 'trend_over_time,' which might overlap in functionality.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. With siblings like 'search_news', 'trend_over_time', and 'compare_coverage', there's no indication of when this deep analysis is preferred over simpler searches or comparisons, leaving the agent without usage context.
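The 'source diversity' facet is undefined in the description. One crude, hypothetical way such a figure could be computed from returned articles, purely to illustrate what documenting it might look like:

```python
def source_diversity(articles):
    """Share of distinct outlets among the returned articles (0..1).
    A hypothetical metric; the real tool's definition is undocumented."""
    if not articles:
        return 0.0
    sources = {a["source"] for a in articles}
    return len(sources) / len(articles)

# Four articles from three outlets: diversity 3/4.
sample = [{"source": "AP"}, {"source": "AP"}, {"source": "NPR"}, {"source": "BBC"}]
print(source_diversity(sample))  # 0.75
```

Stating whether the tool returns a ratio like this, a source count, or a full breakdown would lift the Completeness score.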

top_news: C

Top headlines by country and topic. Topics: business, technology, sports, health, science, entertainment.

Parameters (JSON Schema):
topic (optional): Topic: business, technology, sports, health, science, entertainment
country (optional): Country code (default: us)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what the tool does but lacks critical behavioral details: whether it returns a fixed number of headlines, uses pagination, requires authentication, has rate limits, or provides timestamps. For a news retrieval tool with zero annotation coverage, this leaves significant gaps in understanding its operational behavior.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: a single sentence stating the core functionality followed by a clear list of topic options. Every word earns its place with zero redundancy or unnecessary elaboration. The structure efficiently communicates the essential information without wasting tokens.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (filtered news retrieval), absence of annotations, and lack of output schema, the description is insufficiently complete. It doesn't address what the output looks like (headline list format, metadata included), how results are ordered, or any limitations (e.g., date range, source restrictions). For a tool with no structured behavioral hints, the description should provide more operational context.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value beyond the schema: it repeats the topic options and implies country filtering but doesn't provide additional context about parameter interactions, default behaviors beyond the schema's 'default: us', or semantic meaning of the filters. Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving top headlines filtered by country and topic. It specifies the resource (headlines/news) and the filtering dimensions (country, topic). However, it doesn't explicitly differentiate from siblings like 'breaking_now' or 'search_news,' which likely serve different news retrieval purposes.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'breaking_now' (likely real-time alerts), 'search_news' (likely keyword-based search), and 'trending_topics' (likely popularity-based), there's no indication of when this filtered headline approach is preferred. The description merely lists topics without contextual usage advice.
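Because the topic values are enumerated in both the description and the schema, a client can validate arguments before calling the tool. A small sketch (the helper name is hypothetical):

```python
# Topics enumerated in the top_news description and schema.
VALID_TOPICS = {"business", "technology", "sports", "health",
                "science", "entertainment"}

def build_top_news_args(topic=None, country="us"):
    """Validate top_news arguments client-side before making the call."""
    if topic is not None and topic not in VALID_TOPICS:
        raise ValueError(f"topic must be one of {sorted(VALID_TOPICS)}")
    args = {"country": country}
    if topic is not None:
        args["topic"] = topic
    return args

print(build_top_news_args("technology"))  # {'country': 'us', 'topic': 'technology'}
```

Encoding the topic list as a JSON Schema `enum` would let agents get this validation for free instead of relying on prose.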

trend_over_time: C

Google Trends interest over time for 1-5 keywords. Compare search interest.

Parameters (JSON Schema):
country (optional): Country (default: US)
keywords (optional): List of 1-5 keywords to compare
timeframe (optional): Timeframe: 'today 3-m', 'today 12-m', 'today 5-y' (default: today 3-m)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves 'interest over time' data but lacks details on rate limits, authentication needs, data freshness, or error handling. For a tool interacting with an external API like Google Trends, this is a significant gap in transparency.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, with two short sentences that directly state the tool's function and scope. It is front-loaded with the core purpose and avoids any unnecessary details, making it efficient and easy to parse.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (interacting with Google Trends), lack of annotations, and no output schema, the description is incomplete. It does not explain the return format (e.g., time series data, normalization), potential limitations, or how results are structured, which is crucial for an AI agent to use the tool effectively.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters (country, keywords, timeframe) with descriptions and defaults. The description adds minimal value by implying the keywords parameter ('1-5 keywords') but does not provide additional semantics beyond what the schema offers, meeting the baseline for high coverage.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Google Trends interest over time for 1-5 keywords. Compare search interest.' It specifies the verb ('interest over time'), resource ('Google Trends'), and scope ('1-5 keywords'), but does not explicitly differentiate it from sibling tools like 'trending_topics' or 'topic_deep_dive,' which might also involve trends analysis.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'Compare search interest' but does not specify contexts, exclusions, or refer to sibling tools like 'compare_coverage' or 'trending_topics' that might be relevant for similar tasks, leaving the agent to infer usage based on the name alone.
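The schema's documented constraints (1-5 keywords, three fixed timeframe strings) can likewise be enforced client-side before hitting the external API. A sketch with a hypothetical helper:

```python
# Timeframe strings documented in the trend_over_time schema.
VALID_TIMEFRAMES = {"today 3-m", "today 12-m", "today 5-y"}

def build_trend_args(keywords, timeframe="today 3-m", country="US"):
    """Enforce the documented constraints: 1-5 keywords, a known timeframe."""
    if not 1 <= len(keywords) <= 5:
        raise ValueError("keywords must contain 1-5 entries")
    if timeframe not in VALID_TIMEFRAMES:
        raise ValueError(f"timeframe must be one of {sorted(VALID_TIMEFRAMES)}")
    return {"keywords": list(keywords), "timeframe": timeframe, "country": country}

print(build_trend_args(["rust", "go"]))
# {'keywords': ['rust', 'go'], 'timeframe': 'today 3-m', 'country': 'US'}
```

Expressing the keyword bound as `minItems`/`maxItems` and the timeframes as an `enum` in the schema would make these checks machine-enforceable.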
