BACH-AI-Tools

Google News22 MCP Server

Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 4 tools.
  • No known security issues or vulnerabilities reported.


  • Are you the author? Claim this server.

  • Add related servers to improve discoverability.

Tool Scores

  • search_by_top_headlines

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Mentions 'most popular' suggesting ranking, but omits pagination behavior (despite 'page' parameter), result set size, temporal scope (current vs. historical), and error handling.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence is efficient but contains filler words ('This endpoint lets you'). Information is front-loaded, though singular/plural confusion ('article' vs 'headlines') creates minor ambiguity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a search tool with 3 parameters and no output schema or annotations, the description is insufficient. Missing: return format, pagination details, rate limiting, and differentiation from sibling search tools.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline. Description mentions country and language constraints, aligning with schema. However, it doesn't clarify the 'page' parameter's role in pagination or explain that results are filtered by the required country/language codes.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (find) and resource (news article) with scope (most popular). However, uses singular 'article' when 'headlines' implies multiple results, and doesn't differentiate from siblings like search_by_topic_headlines or search_by_keyword.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance on when to use this versus the three sibling search tools (search_by_geolocation, search_by_keyword, search_by_topic_headlines). No mention of prerequisites or exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_by_geolocation

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Mentions 'most popular' ranking behavior, but fails to explain pagination (despite 'page' parameter), error handling, or whether it returns single or multiple articles. Does not clarify rate limits or auth requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, appropriately brief. Front-loaded with key action. Minor wordiness with 'This endpoint lets you' rather than direct 'Finds...' construction, but no redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Insufficient for a search tool with 4 parameters and no output schema. Lacks explanation of result structure, pagination behavior, and sorting methodology. Should clarify whether results are ranked by popularity and how the 'page' parameter interacts with result sets.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage (country ISO code, language code, location string, page number). Description adds minimal semantic value beyond schema, only loosely referencing 'geographical location'. Does not explain relationship between 'country' and 'location' parameters or pagination logic.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('find') and resource ('news article'), with specific scope ('geographical location'). Distinguishes from siblings by geolocation focus. However, uses implementation terminology ('endpoint') and ambiguously suggests singular result ('article') despite pagination parameter suggesting multiple results.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance on when to select this tool versus siblings (search_by_keyword, search_by_top_headlines, search_by_topic_headlines). No mention of prerequisites or conditions where geolocation search is preferred.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_by_keyword

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description must carry the full burden of behavioral disclosure. However, it fails to mention whether the operation is read-only (implied but not stated), pagination behavior, result limits, or what happens when no articles match. It only states the tool 'allows you to filter' without explaining the filtering logic (AND/OR behavior).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single sentence that front-loads the primary action ('Find articles by keywords'). It is slightly awkward ('allows you to... to get specific result') and ends with vague filler ('specific result'), but remains appropriately brief for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a search tool with 8 parameters and no output schema or annotations, the description covers the core functionality (keyword search) and major filter categories. However, it omits pagination behavior (page/limit parameters) and provides no hints about the return format or result structure, leaving significant gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents all 8 parameters including detailed query syntax for the 'q' parameter and ISO codes for country/language. The description adds marginal value by confirming the filterable fields but does not provide syntax guidance beyond what the schema already contains, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Find[s] articles by keywords' and lists the available filters (country, language, source, date). This distinguishes it from sibling tools like search_by_geolocation and search_by_top_headlines by emphasizing the keyword-based search mechanism, though it does not explicitly name the alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus its siblings (search_by_geolocation, search_by_top_headlines, search_by_topic_headlines). There are no prerequisites, exclusions, or conditional usage scenarios mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_by_topic_headlines

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden but provides minimal behavioral context. It mentions 'most popular' (ranking behavior) but omits access tier restrictions, pagination behavior (despite the page parameter), date filtering logic, rate limits, or what happens when no results exist. The schema reveals access levels that the description should highlight.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, front-loaded sentence with no wasted words, though 'This endpoint lets you find' is less direct than action verbs like 'Finds' or 'Retrieves'. The parenthetical examples are appropriately concise.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 5 parameters (2 optional), no annotations, and no output schema, the description provides minimum viable context for basic invocation but leaves gaps. It does not explain the optional date/page parameters' behavior, the access tier limitations visible in the schema, or the return format (e.g., whether it returns full articles or just headlines).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description mentions the three required parameters (country, language, topic) aligning with the schema, but adds no semantic detail beyond the schema for the optional date and page parameters (which have empty example values in the schema). It does not explain the ISO code formats or pagination syntax.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool finds 'the most popular news article' (specific verb+resource) filtered by country, language, and topic. It distinguishes from siblings by explicitly mentioning 'topic' (contrasting with geolocation, keyword, and top_headlines variants), though the singular 'article' slightly mismatches the plural 'headlines' in the tool name.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides topic examples ('like sports or entertainment') which imply usage, but offers no explicit guidance on when to prefer this tool over search_by_keyword or search_by_top_headlines. It fails to mention the access level restrictions (Basic/Pro/Ultra/Mega) detailed in the topic parameter schema, which is critical for usage decisions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse; a sketch of what that guidance could look like for these tools follows below.
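
To make the recurring feedback concrete, here is a hypothetical rewrite of one tool's definition, sketched as a Python dict. The description text, usage guidance, and annotation values are invented for illustration; only the tool and sibling names come from the scores above, and the annotation fields follow the MCP tool-annotations convention (readOnlyHint, openWorldHint).

# Hypothetical rewrite of search_by_top_headlines, illustrating the annotations
# and usage guidance the reviews above ask for. All wording and hint values are
# illustrative assumptions, not taken from the actual server.
improved_tool = {
    "name": "search_by_top_headlines",
    "description": (
        "Finds the most popular current news headlines for a required country and "
        "language (ISO codes). Returns a paginated list of articles; pass 'page' to "
        "fetch further result pages. Use search_by_keyword for free-text queries, "
        "search_by_topic_headlines to restrict results to a topic, and "
        "search_by_geolocation for place-based searches."
    ),
    "annotations": {
        "readOnlyHint": True,   # only reads news data; no side effects
        "openWorldHint": True,  # results come from an external, changing news source
    },
}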

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Badge image: bachai-google-news22 MCP server]

Copy the badge snippet to your README.md.

Score Badge

[Badge image: bachai-google-news22 MCP server]

Copy the badge snippet to your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); the weighted combination is the tool's Tool Definition Quality Score (TDQS). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
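
As a minimal sketch of how these weights combine, assuming each tool's six dimension scores on the 1–5 scale above are folded into a single TDQS using the listed weights, the calculation could look like this in Python. The function and variable names are hypothetical; only the weights and tier cutoffs come from this section.

# Dimension weights and tier cutoffs as stated above; names are hypothetical.
DIMENSION_WEIGHTS = {
    "purpose_clarity": 0.25,
    "usage_guidelines": 0.20,
    "behavioral_transparency": 0.20,
    "parameter_semantics": 0.15,
    "conciseness_structure": 0.10,
    "contextual_completeness": 0.10,
}

def tool_tdqs(dimension_scores: dict[str, float]) -> float:
    # Weighted 1-5 Tool Definition Quality Score for a single tool.
    return sum(w * dimension_scores[d] for d, w in DIMENSION_WEIGHTS.items())

def overall_score(tool_scores: list[float], coherence: float) -> float:
    # Definition quality: 60% mean TDQS + 40% minimum TDQS, so one weak tool
    # drags the server down. Overall: 70% definition quality + 30% coherence.
    definition_quality = 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    # Letter tiers from the overall score; B and above is passing.
    if score >= 3.5:
        return "A"
    if score >= 3.0:
        return "B"
    if score >= 2.0:
        return "C"
    if score >= 1.0:
        return "D"
    return "F"

# Example with made-up numbers: four tools and a coherence score of 3.0.
print(tier(overall_score([2.85, 2.90, 3.05, 3.00], coherence=3.0)))  # -> "C"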

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/BACH-AI-Tools/bachai-google-news22'
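
The same request from Python, as a minimal sketch: only the endpoint comes from the curl example above, and the response is printed as-is because its schema is not documented here.

import json
import urllib.request

# Endpoint taken from the curl example above.
URL = "https://glama.ai/api/mcp/v1/servers/BACH-AI-Tools/bachai-google-news22"

with urllib.request.urlopen(URL) as response:
    server = json.load(response)

# Print the raw JSON; field names are not documented in this section.
print(json.dumps(server, indent=2))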

If you have feedback or need assistance with the MCP directory API, please join our Discord server.