NewzAI News MCP server

Server Details

The only news-focused AI MCP server your agents will ever need: custom categories, global regions, and time-scoped results in one tool. We use multi-vector and sparse-hybrid search over thousands of articles across the world to find exactly the news you're looking for.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
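
For orientation, here is a minimal sketch of connecting to a Streamable HTTP MCP server with the official Python MCP SDK and listing its tools. The endpoint URL is a placeholder, and the exact SDK surface may differ between versions.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder: substitute the server URL (or the Glama gateway URL) shown above.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # Open the Streamable HTTP transport, then an MCP client session over it.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```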

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 7 of 7 tools scored. Lowest: 3.3/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes: get_news_by_category and get_news_by_preference differ in input type (category vs. preference ID), get_news_headlines provides limited data, get_related_news focuses on similarity, get_user_preferences and set_user_preferences handle user data, and search_news allows free-form queries. However, get_news_by_category and search_news could be slightly confused as both retrieve full news details with filters, though their input methods differ (predefined category vs. keyword).

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case, using clear verbs like 'get', 'search', and 'set'. The naming is predictable and readable, making it easy for agents to understand the action and target resource without confusion.

Tool Count: 5/5

With 7 tools, this server is well-scoped for a news domain, covering core operations like retrieving news by various methods, managing user preferences, and searching. The count is appropriate, avoiding bloat while providing enough functionality for typical agent workflows without feeling thin or overwhelming.

Completeness: 4/5

The tool set covers key aspects of news retrieval and user preference management, including CRUD-like operations for preferences (get and set) and multiple news-fetching methods. A minor gap is the lack of a tool to delete or update individual user preferences, which might require workarounds, but core workflows like reading news and saving preferences are well-supported.

Available Tools

7 tools
get_news_by_category: B

Fetches news for a predefined category with full details including title, source, summary, age, card_url, and source_url. Specify the predefined category you want news for, the source region of the news, and the OUTPUT language of the news. Note: Language is not a filter.

Parameters
- top_k (optional): Number of news items to fetch
- region (optional, default: india): Region from which to get news for the predefined category
- language (optional, default: en): Language of the news output. It is not a filter; news in other languages may also be included if the output language differs.
- predefined_category (required): Predefined category to search (e.g., TECHNOLOGY, BUSINESS)

Output Schema

- result (required)
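
As an illustration of how the parameters above combine, here is a hedged sketch of calling this tool through the Python MCP SDK. The argument values are examples only, and `session` is assumed to be an initialized ClientSession like the one in the connection sketch earlier on this page.

```python
from mcp import ClientSession

async def fetch_category_news(session: ClientSession):
    # Example arguments only; predefined_category is the sole required field.
    return await session.call_tool(
        "get_news_by_category",
        {
            "predefined_category": "TECHNOLOGY",  # e.g. TECHNOLOGY, BUSINESS
            "region": "india",                    # optional, defaults to "india"
            "language": "en",                     # output language only, not a filter
            "top_k": 5,                           # optional number of items to fetch
        },
    )
```
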
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers limited behavioral insight. It mentions that language is not a filter and that news may include other languages, adding some context. However, it doesn't cover critical aspects like rate limits, authentication needs, pagination, or error handling, leaving significant gaps for a tool with multiple parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences, front-loaded with the core purpose. The note about language adds necessary clarification without redundancy. However, the first sentence is slightly verbose with the list of fields, which could be streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, 100% schema coverage, and presence of an output schema, the description is reasonably complete. It covers the purpose and key behavioral note about language. With output schema handling return values, the description doesn't need to explain outputs, making it adequate for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal value by reiterating that language is not a filter and specifying output language, but doesn't provide additional meaning beyond what's in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('fetches news') and resource ('for a predefined category'), specifying the detailed fields returned. It distinguishes from 'get_news_headlines' by mentioning 'full details' and from 'search_news' by focusing on categories rather than search queries. However, it doesn't explicitly name these siblings for differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying inputs like category, region, and language, but doesn't explicitly state when to use this tool versus alternatives like 'get_news_headlines' or 'search_news'. It provides some context with the note about language not being a filter, but lacks clear when/when-not guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_news_by_preference: A

Fetches news for a specific saved user preference identified by its ID. The preference defines the category, region, and language of news to retrieve. Use get_user_preferences first to obtain valid preference IDs. Login is required to access this tool.

Parameters
- preference_id (required): ID of the saved user preference to fetch news for

Output Schema

- result (required)
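
A sketch of the documented workflow: list preferences first, then fetch news for one of them. The page does not show the shape of the get_user_preferences result, so the JSON parsing and the `id` field below are assumptions to be checked against the real output schema.

```python
import json

from mcp import ClientSession

async def fetch_preferred_news(session: ClientSession):
    # Step 1: list saved preferences (login required for both tools).
    prefs_result = await session.call_tool("get_user_preferences", {})

    # Step 2 (assumption): the result is a text block containing JSON with an
    # "id" field per preference; adapt to the actual output schema.
    prefs = json.loads(prefs_result.content[0].text)
    preference_id = prefs[0]["id"]  # hypothetical field name

    # Step 3: fetch news for that saved preference.
    return await session.call_tool(
        "get_news_by_preference",
        {"preference_id": preference_id},
    )
```
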
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses that login is required (an auth need) and explains the dependency on get_user_preferences for obtaining IDs. However, it lacks details on rate limits, pagination, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by two concise sentences adding critical usage and behavioral context, with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, and an output schema (which handles return values), the description is mostly complete: it covers purpose, usage prerequisites, and auth needs. However, it could improve by mentioning potential limitations like result count or time ranges.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the parameter. The description adds minimal value by mentioning that the preference defines 'category, region, and language of news to retrieve,' but this doesn't provide syntax or format details beyond the schema's 'ID of the saved user preference.'

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('fetches news') and resource ('for a specific saved user preference identified by its ID'), distinguishing it from siblings like get_news_by_category, get_news_headlines, and search_news by emphasizing the preference-based retrieval mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly provides when-to-use guidance by stating 'Use get_user_preferences first to obtain valid preference IDs' and 'Login is required to access this tool,' clearly differentiating it from alternatives that might not require login or use different parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_news_headlines: A

Fetches the latest headlines (title and source only, no summary) for a specified region. Use get_news_by_category or search_news when full article details like summary or source URL are needed.

Parameters
- region (required): Region from which you request custom news
- language (optional, default: en): Output language of the news content. Not a filter; news in other languages may be included.

Output Schema

- result (required)
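
A minimal hedged example call; region is the only required argument, and the values shown are illustrative.

```python
from mcp import ClientSession

async def fetch_headlines(session: ClientSession):
    # Returns titles and sources only; use get_news_by_category or search_news
    # when summaries or source URLs are needed.
    return await session.call_tool(
        "get_news_headlines",
        {"region": "india", "language": "en"},  # language is output-only, not a filter
    )
```
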
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the output format ('title and source only, no summary') and regional filtering, but does not mention rate limits, authentication needs, pagination, or error handling. It adds some context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste. The first sentence states the purpose and key limitation, and the second provides explicit usage guidance. Every word earns its place, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists (so return values need not be explained), the description covers purpose, limitations, and usage guidelines well. However, as a tool with no annotations, it could benefit from more behavioral context like rate limits or error handling. It is largely complete but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter semantics beyond what the schema provides, such as explaining the relationship between region and language or clarifying edge cases. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('fetches'), resource ('latest headlines'), and scope ('title and source only, no summary'), distinguishing it from sibling tools. It explicitly mentions what it does not provide (full article details), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('for a specified region') and when to use alternatives ('Use get_news_by_category or search_news when full article details... are needed'). It provides clear guidance on tool selection based on output needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_user_preferences: A

Returns all saved news preferences for the authenticated user. Each preference contains a news category, region, output language, and a daily refresh time. Login is required to access this tool.

Parameters

No parameters

Output Schema

- result (required)
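
Because the tool takes no parameters, a call is just the tool name with an empty argument object. A hedged sketch (requires a logged-in session):

```python
from mcp import ClientSession

async def list_preferences(session: ClientSession):
    # No input parameters; returns the authenticated user's saved preferences.
    return await session.call_tool("get_user_preferences", {})
```
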
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a read operation ('Returns'), requires authentication ('Login is required'), and specifies the data structure returned. However, it lacks details on error handling, rate limits, or response format beyond the preference fields listed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by essential details in the sentences that follow. Every sentence adds value: the first defines the action and scope, the second specifies the data structure, and the third states authentication requirements, with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no annotations, but with an output schema), the description is mostly complete. It covers purpose, data returned, and authentication needs. However, it could benefit from mentioning the output schema's role or potential limitations, though the presence of an output schema reduces the need for detailed return value explanations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the description has no parameters to document. It appropriately focuses on the tool's purpose and behavior without redundant parameter information, earning a baseline score above 3 for compensating with clear context in a parameter-less tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Returns all saved news preferences') and resource ('for the authenticated user'), with explicit details about what each preference contains. It distinguishes this tool from siblings like 'get_news_by_category' or 'set_user_preferences' by focusing on user preferences rather than news content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Login is required to access this tool'), but it does not explicitly state when not to use it or name specific alternatives. It implies usage for retrieving saved preferences rather than news data, which helps differentiate from sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_news: A

Searches news for a free-form topic or keyword and returns full details including summary, age, and source URL. Specify the region filter, output language, and top_k. Note: Language is the output language, not a filter — news in other languages may also be included.

Parameters
- top_k (optional): Number of news items to fetch
- region (optional, default: india): Region filter for the news search
- language (optional, default: en): Language code of the output news content, e.g., 'en' for English. It is not a filter; news in other languages may also be included.
- search_type (optional, default: hybrid): Use Hybrid search for semantic + keyword search. Select Sparse for keyword-preferred search and Vector for semantic-preferred search. Use Vector search for languages other than English.
- enable_decay (optional): Whether to apply decay to increase the relevance of recent news
- last_n_hours (optional): Time range in hours to fetch recent news, e.g., 24 for news from the last 24 hours
- search_string (required): Free-form search string/topic to search for news (e.g., 'latest in AI', '2024 Olympics')

Output Schema

- result (required)
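
An illustrative sketch combining the search, time-scoping, and search-type options described above; all values are examples, and the accepted search_type strings should be confirmed against the JSON schema.

```python
from mcp import ClientSession

async def search_recent_ai_news(session: ClientSession):
    return await session.call_tool(
        "search_news",
        {
            "search_string": "latest in AI",  # required free-form query
            "region": "india",                # optional region filter
            "language": "en",                 # output language only, not a filter
            "search_type": "hybrid",          # default; sparse and vector also described
            "enable_decay": True,             # favour more recent articles
            "last_n_hours": 24,               # only news from the last 24 hours
            "top_k": 10,                      # number of items to return
        },
    )
```
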
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context: it clarifies that language is for output only (not a filter), mentions that news in other languages may be included, and describes the return format ('full details including summary, age, and source URL'). However, it doesn't address rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: the first states purpose and return format, the second clarifies parameter usage. It's front-loaded with core functionality. Minor room for improvement in flow, but overall efficient with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, 100% schema coverage, and an output schema (implied by context signals), the description provides adequate context for a search tool. It covers key behavioral aspects (language handling, return format) but could better address when to use versus siblings or advanced search behaviors. The presence of output schema reduces need to describe return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal semantic value beyond the schema—it mentions region filter, output language, and top_k but doesn't provide additional context or examples not already in parameter descriptions. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('searches news') and resources ('full details including summary, age, and source URL'). It distinguishes from sibling tools (get_news_by_category, get_news_headlines) by emphasizing free-form topic/keyword search rather than categorical or headline-based retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through its parameter explanations (e.g., 'Specify the region filter, output language, and top_k') but doesn't explicitly state when to use this tool versus alternatives like get_news_by_category or get_news_headlines. No clear exclusions or comparative guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_user_preferences: A

Adds one or more news preferences for the authenticated user. Each preference specifies a news category, region, output language, and a daily refresh time. Existing preferences are not removed — new ones are appended. Login is required to access this tool.

Parameters
- preferences (required): List of preferences to add for the user

Output Schema

- result (required)
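
A hedged sketch of appending a preference. The page says each preference carries a category, region, output language, and daily refresh time, but the exact field names are not shown, so the keys inside the preference object below are hypothetical and should be checked against the tool's JSON schema.

```python
from mcp import ClientSession

async def add_preference(session: ClientSession):
    # Appends to existing preferences; nothing is removed. Login required.
    return await session.call_tool(
        "set_user_preferences",
        {
            "preferences": [
                {
                    # Hypothetical field names; verify against the tool's schema.
                    "category": "TECHNOLOGY",
                    "region": "india",
                    "language": "en",
                    "refresh_time": "08:00",
                }
            ]
        },
    )
```
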
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the mutation behavior ('Adds... preferences'), clarifies that it's an append operation rather than a replacement, and states the authentication requirement ('Login is required'). It doesn't mention rate limits, error conditions, or response format, but covers the essential behavioral traits for this type of operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly sized and front-loaded with the core purpose in the first sentence. Each subsequent sentence adds essential information about the operation's behavior and requirements without any wasted words. The three sentences each earn their place by providing distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there's an output schema (though not shown here), the description doesn't need to explain return values. It covers the mutation nature, authentication requirement, and append behavior well. For a tool with 100% schema coverage and output schema, the description provides adequate context, though it could potentially mention success/failure indicators.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already fully documents the single parameter and its nested structure. The description mentions what each preference contains but doesn't add syntax or format details beyond what the schema provides. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Adds one or more news preferences'), the resource ('for the authenticated user'), and the scope ('Each preference specifies a news category, region, output language, and a daily refresh time'). It explicitly distinguishes this from sibling tools like 'get_user_preferences' by focusing on addition rather than retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Login is required to access this tool') and clarifies the behavior ('Existing preferences are not removed — new ones are appended'), which helps differentiate it from potential alternatives like update or replace operations. However, it doesn't explicitly name when-not-to-use scenarios or mention specific sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
