NewzAI MCP server
Server Details
The only news-based AI MCP your agents will ever need — custom categories, global regions, and time-scoped results in one tool. We use multi-vector and sparse hybrid search across thousands of articles worldwide to find the exact news you're looking for.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.1/5 across 3 of 3 tools scored.
While fetch_news_headlines serves a distinct purpose, search_news_for_any_category and search_news_predefined_category overlap significantly; agents cannot easily determine whether a category is 'predefined' or 'custom' without external knowledge, creating selection ambiguity.
Mixed verb usage ('fetch' vs 'search') creates inconsistency, though all tools use snake_case. The suffix patterns vary ('headlines' vs 'for_any_category' vs 'predefined_category'), lacking a uniform structural pattern.
Three tools is minimal but reasonable for a focused news retrieval service. However, the redundancy between the two category search tools suggests the set could be consolidated without losing functionality.
Covers regional headlines and category browsing but lacks keyword-based search and detailed article retrieval. Additionally, there is no tool to list available predefined categories, making the predefined_category tool difficult to use effectively.
Available Tools
3 tools

fetch_news_headlines (quality grade: B)
Fetches the latest news headlines based on region. Use language for output language
| Name | Required | Description | Default |
|---|---|---|---|
| region | Yes | Region from where you request custom news | |
| language | No | Language in which you request custom news | en |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
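MCP tools are invoked with JSON-RPC 2.0 `tools/call` requests over the server's transport. A minimal sketch of the request body an agent would send for this tool; the request `id` and argument values are illustrative, and the endpoint URL would come from your connector configuration:

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body, as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Fetch headlines for a region; `language` defaults to "en" server-side.
body = build_tool_call("fetch_news_headlines",
                       {"region": "india", "language": "en"})
print(body)
```

The same helper applies to the other two tools; only the `name` and `arguments` change.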
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure, but it fails to mention critical traits such as rate limits, caching behavior, data freshness, or whether this is a read-only operation. The description only covers the basic functional purpose without addressing operational characteristics. With no indication of API limits or quota consumption, the behavioral profile remains largely undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with no redundant or wasted words, immediately stating the purpose in the first sentence and parameter guidance in the second. The structure appropriately front-loads the core functionality. This level of brevity is well-suited for a simple two-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers the basic invocation pattern for a simple headline fetching tool, and the existence of an output schema means return values need not be described. However, given the lack of annotations and the presence of sibling tools with overlapping domains, the description should provide more context about selection criteria and operational constraints to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with both region and language parameters fully documented in the schema itself, establishing a baseline score per the rubric. The description adds minimal value beyond the schema, though it does clarify that the language parameter controls the 'output language' which slightly reinforces the schema's 'request custom news' description. Given the complete schema coverage, the description does not need to compensate for missing parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Fetches the latest news headlines' with specific scope 'based on region', providing a clear verb and resource. However, it does not explicitly differentiate from the category-based sibling tools (search_news_for_any_category, search_news_predefined_category) or explain when to prefer this tool over them. The mention of 'region' implies the distinction but lacks explicit guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus the sibling search tools, nor does it mention prerequisites or conditions. It only mentions the language parameter usage without contextual selection criteria. Users cannot determine from this description whether to use fetch_news_headlines or search_news_for_any_category for a given news retrieval task.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_news_for_any_category (quality grade: B)
Fetches news for a custom category with detailed information. Specify the region, output language and the top k
| Name | Required | Description | Default |
|---|---|---|---|
| top_k | No | Number of news items to fetch | |
| region | No | Region from where one want to fetch news | india |
| category | Yes | Custom category/topic to search | |
| language | No | Language code in which you request custom news | en |
| last_n_hours | No | Time range in hours to fetch recent news, e.g., 24 for news from the last 24 hours | |
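Since `region` and `language` carry server-side defaults, an agent only needs to send the parameters it wants to override. A sketch of assembling the arguments for this tool, dropping unset optional parameters so the documented defaults (`region=india`, `language=en`) take effect; the helper name is ours, not part of the server:

```python
def build_category_search_args(category, top_k=None, region=None,
                               language=None, last_n_hours=None):
    """Assemble arguments for search_news_for_any_category, omitting
    unset optional parameters so server-side defaults apply."""
    args = {
        "category": category,          # required: free-form topic
        "top_k": top_k,                # optional: number of items
        "region": region,              # optional: defaults to "india"
        "language": language,          # optional: defaults to "en"
        "last_n_hours": last_n_hours,  # optional: e.g. 24 for the last day
    }
    return {k: v for k, v in args.items() if v is not None}

# Only explicitly set values are sent; the rest fall back to defaults.
print(build_category_search_args("semiconductor policy", top_k=5,
                                 last_n_hours=48))
```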
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'detailed information' regarding output, but fails to disclose critical behavioral traits like rate limits, authentication requirements, what happens if the category returns no results, or cache behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short sentences with the primary purpose front-loaded. However, the second sentence is largely redundant given the comprehensive schema documentation, slightly reducing its value density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with full schema coverage but no output schema or annotations, the description meets minimum requirements but lacks completeness. It should ideally describe the output format or what 'detailed information' entails to compensate for the missing output schema and annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline score of 3. The description's second sentence ('Specify the region, output language and the top k') essentially repeats parameter names without adding semantic context (e.g., that 'category' accepts free-form text while 'region' accepts only specific enum values).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Fetches') and resource ('news') and clarifies the scope is for a 'custom category', which distinguishes it from the sibling 'search_news_predefined_category'. However, it could explicitly name the sibling tools to make the differentiation sharper.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'custom category' implies when to use this tool (for arbitrary topics vs. predefined ones), but there are no explicit when-to-use guidelines, exclusions, or mentions of the sibling alternatives ('fetch_news_headlines', 'search_news_predefined_category') to guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_news_predefined_category (quality grade: C)
Fetches news for a predefined category with detailed information. Specify the region, output language and the top k
| Name | Required | Description | Default |
|---|---|---|---|
| top_k | No | Number of news items to fetch | |
| region | No | Region from where one want to fetch news in the predefined category | india |
| language | No | Language in which you request custom news | en |
| predefined_category | Yes | Predefined category to search (e.g., TECHNOLOGY, BUSINESS) | |
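Because the server exposes no tool for listing valid predefined categories (a gap noted in the quality summary above), an agent has to validate the category client-side. A sketch assuming a hypothetical candidate set: only TECHNOLOGY and BUSINESS are confirmed by the schema's examples, and the rest are guesses that may not match the server's actual list:

```python
# Hypothetical category set; only TECHNOLOGY and BUSINESS are confirmed
# by the schema's examples. The others are assumptions.
KNOWN_CATEGORIES = {"TECHNOLOGY", "BUSINESS", "SPORTS", "SCIENCE", "HEALTH"}

def normalize_predefined_category(raw: str) -> str:
    """Uppercase the input and check it against the assumed category set,
    raising early instead of sending an unknown value to the server."""
    category = raw.strip().upper()
    if category not in KNOWN_CATEGORIES:
        raise ValueError(f"Unknown predefined category: {category!r}; "
                         f"expected one of {sorted(KNOWN_CATEGORIES)}")
    return category

print(normalize_predefined_category("technology"))  # TECHNOLOGY
```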
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions 'detailed information' (suggesting depth of returned content), it fails to disclose other critical behavioral traits such as rate limits, authentication requirements, whether results are real-time or cached, or the structure of returned data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences. The first sentence front-loads the core purpose and value proposition ('detailed information'), while the second sentence previews the key parameters. There is minimal redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description partially compensates by mentioning 'detailed information' to characterize return values. However, it remains vague about the actual return structure (e.g., article fields, timestamps) and, combined with no annotations, leaves gaps in understanding the tool's full operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters (region, language, top_k, predefined_category). The description merely lists three of these parameters without adding semantic value, syntax details, or usage examples beyond what the schema provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Fetches news for a predefined category' with 'detailed information', using specific verbs and resources. It implicitly distinguishes from sibling 'search_news_for_any_category' by emphasizing 'predefined' and from 'fetch_news_headlines' by promising 'detailed information' rather than just headlines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus its siblings (fetch_news_headlines or search_news_for_any_category). There are no 'when-to-use' conditions, prerequisites, or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
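Before publishing, you can sanity-check the file locally. A minimal sketch that validates the two fields described above; the expected-email value is illustrative:

```python
import json

def check_glama_json(text: str, expected_email: str) -> bool:
    """Verify a glama.json document declares the connector schema and
    lists the expected maintainer email."""
    doc = json.loads(text)
    emails = [m.get("email") for m in doc.get("maintainers", [])]
    return (doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
            and expected_email in emails)

sample = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
print(check_glama_json(sample, "your-email@example.com"))  # True
```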
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.