
google-news-trends-mcp

by jmanek

get_news_by_topic

Fetch news articles on a specific topic from Google News. Adjust time period and result count to tailor your search.

Instructions

Find articles by topic using Google News. topic is one of WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH, POLITICS, CELEBRITIES, TV, MUSIC, MOVIES, THEATER, SOCCER, CYCLING, MOTOR SPORTS, TENNIS, COMBAT SPORTS, BASKETBALL, BASEBALL, FOOTBALL, SPORTS BETTING, WATER SPORTS, HOCKEY, GOLF, CRICKET, RUGBY, ECONOMY, PERSONAL FINANCE, FINANCE, DIGITAL CURRENCIES, MOBILE, ENERGY, GAMING, INTERNET SECURITY, GADGETS, VIRTUAL REALITY, ROBOTICS, NUTRITION, PUBLIC HEALTH, MENTAL HEALTH, MEDICINE, SPACE, WILDLIFE, ENVIRONMENT, NEUROSCIENCE, PHYSICS, GEOLOGY, PALEONTOLOGY, SOCIAL SCIENCES, EDUCATION, JOBS, ONLINE EDUCATION, HIGHER EDUCATION, VEHICLES, ARTS-DESIGN, BEAUTY, FOOD, TRAVEL, SHOPPING, HOME, OUTDOORS, FASHION.

Input Schema

Name         Required  Default  Description
topic        Yes       —        Topic to search for articles.
period       No        7        Number of days to look back for articles.
max_results  No        10       Maximum number of results to return.
full_data    No        False    Return full data for each article. If False, a summary can be generated by setting the summarize flag.
summarize    No        True     Generate a summary of each article; LLM sampling is tried first, with NLP as a fallback.
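Taken together, the schema implies a call shape like the following sketch (values shown are the defaults from the handler signature; only topic is required):

```python
# Example tool-call arguments for get_news_by_topic.
# Defaults are taken from the handler signature shown in the
# Implementation Reference; only "topic" is required.
args = {
    "topic": "TECHNOLOGY",  # required; one of the topics listed above
    "period": 7,            # optional; days to look back (default 7)
    "max_results": 10,      # optional; default 10
    "full_data": False,     # optional; default False returns trimmed articles
    "summarize": True,      # optional; default True (LLM sampling, NLP fallback)
}
```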

Output Schema

Name    Required  Description
result  Yes       List of ArticleOut objects (see Implementation Reference).

Implementation Reference

  • The MCP tool handler for 'get_news_by_topic'. Registered with the @mcp.tool decorator, it accepts a topic string along with period, max_results, full_data, and summarize parameters, delegates to news.get_news_by_topic to fetch articles, then optionally summarizes them.
    @mcp.tool(description=news.get_news_by_topic.__doc__, tags={"news", "articles", "topic"})
    async def get_news_by_topic(
        ctx: Context,
        topic: Annotated[str, Field(description="Topic to search for articles.")],
        period: Annotated[int, Field(description="Number of days to look back for articles.", ge=1)] = 7,
        max_results: Annotated[int, Field(description="Maximum number of results to return.", ge=1)] = 10,
        full_data: Annotated[
            bool,
            Field(
                description="Return full data for each article. If False a summary should be created by setting the summarize flag"
            ),
        ] = False,
        summarize: Annotated[
            bool,
            Field(
                description="Generate a summary of the article, will first try LLM Sampling but if unavailable will use nlp"
            ),
        ] = True,
    ) -> list[ArticleOut]:
        set_newspaper_article_fields(full_data)
        articles = await news.get_news_by_topic(
            topic=topic,
            period=period,
            max_results=max_results,
            nlp=False,
            report_progress=ctx.report_progress,
        )
        if summarize:
            await summarize_articles(articles, ctx)
        await ctx.report_progress(progress=len(articles), total=len(articles))
        return [ArticleOut(**a.to_json(False)) for a in articles]
  • Core business logic for 'get_news_by_topic'. Calls gnews library's get_news_by_topic(topic), then downloads and processes articles (with optional NLP). Documents the valid topic values.
    async def get_news_by_topic(
        topic: str,
        period=7,
        max_results: int = 10,
        nlp: bool = True,
        report_progress: Optional[ProgressCallback] = None,
    ) -> list[newspaper.Article]:
        """Find articles by topic using Google News.
        topic is one of
        WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH,
        POLITICS, CELEBRITIES, TV, MUSIC, MOVIES, THEATER, SOCCER, CYCLING, MOTOR SPORTS,
        TENNIS, COMBAT SPORTS, BASKETBALL, BASEBALL, FOOTBALL, SPORTS BETTING, WATER SPORTS,
        HOCKEY, GOLF, CRICKET, RUGBY, ECONOMY, PERSONAL FINANCE, FINANCE, DIGITAL CURRENCIES,
        MOBILE, ENERGY, GAMING, INTERNET SECURITY, GADGETS, VIRTUAL REALITY, ROBOTICS, NUTRITION,
        PUBLIC HEALTH, MENTAL HEALTH, MEDICINE, SPACE, WILDLIFE, ENVIRONMENT, NEUROSCIENCE, PHYSICS,
        GEOLOGY, PALEONTOLOGY, SOCIAL SCIENCES, EDUCATION, JOBS, ONLINE EDUCATION, HIGHER EDUCATION,
        VEHICLES, ARTS-DESIGN, BEAUTY, FOOD, TRAVEL, SHOPPING, HOME, OUTDOORS, FASHION.
        """
        google_news.period = f"{period}d"
        google_news.max_results = max_results
        gnews_articles = google_news.get_news_by_topic(topic)
        if not gnews_articles:
            logger.debug(f"No articles found for topic '{topic}' in the last {period} days.")
            return []
        return await process_gnews_articles(gnews_articles, nlp=nlp, report_progress=report_progress)
  • Output schema (ArticleOut) used by the tool to return structured article data to the client.
    class ArticleOut(BaseModelClean):
        title: Annotated[str, Field(description="Title of the article.")]
        url: Annotated[str, Field(description="Original article URL.")]
        read_more_link: Annotated[Optional[str], Field(description="Link to read more about the article.")] = None
        language: Annotated[Optional[str], Field(description="Language code of the article.")] = None
        meta_img: Annotated[Optional[str], Field(description="Meta image URL.")] = None
        movies: Annotated[Optional[list[str]], Field(description="List of movie URLs or IDs.")] = None
        meta_favicon: Annotated[Optional[str], Field(description="Favicon URL from meta data.")] = None
        meta_site_name: Annotated[Optional[str], Field(description="Site name from meta data.")] = None
        authors: Annotated[Optional[list[str]], Field(description="list of authors.")] = None
        publish_date: Annotated[Optional[str], Field(description="Publish date in ISO format.")] = None
        top_image: Annotated[Optional[str], Field(description="URL of the top image.")] = None
        images: Annotated[Optional[list[str]], Field(description="list of image URLs.")] = None
        text: Annotated[Optional[str], Field(description="Full text of the article.")] = None
        summary: Annotated[Optional[str], Field(description="Summary of the article.")] = None
        keywords: Annotated[Optional[list[str]], Field(description="Extracted keywords.")] = None
        tags: Annotated[Optional[list[str]], Field(description="Tags for the article.")] = None
        meta_keywords: Annotated[Optional[list[str]], Field(description="Meta keywords from the article.")] = None
        meta_description: Annotated[Optional[str], Field(description="Meta description from the article.")] = None
        canonical_link: Annotated[Optional[str], Field(description="Canonical link for the article.")] = None
        meta_data: Annotated[Optional[dict[str, str | int]], Field(description="Meta data dictionary.")] = None
        meta_lang: Annotated[Optional[str], Field(description="Language of the article.")] = None
        source_url: Annotated[Optional[str], Field(description="Source URL if different from original.")] = None
  • Registration of the 'get_news_by_topic' tool via the @mcp.tool decorator with description and tags.
    @mcp.tool(description=news.get_news_by_topic.__doc__, tags={"news", "articles", "topic"})
  • CLI command wrapper for 'get_news_by_topic' using Click, providing a command-line interface to the same functionality.
    @cli.command(help=get_news_by_topic.__doc__)
    @click.argument("topic")
    @click.option("--period", type=int, default=7, help="Period in days to search for articles.")
    @click.option(
        "--max-results",
        "max_results",
        type=int,
        default=10,
        help="Maximum number of results to return.",
    )
    @click.option("--no-nlp", is_flag=True, default=False, help="Disable NLP processing for articles.")
    def topic(topic, period, max_results, no_nlp):
        @BrowserManager()
        async def _topic():
            articles = await get_news_by_topic(topic, period=period, max_results=max_results, nlp=not no_nlp)
            print_articles(articles)
            logger.info(f"Found {len(articles)} articles for topic '{topic}'.")
    
        asyncio.run(_topic())
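The core routine above maps the tool's period and max_results inputs onto the gnews client (note the day-suffixed period string, f"{period}d") before fetching. A minimal sketch of that mapping, where build_gnews_settings is an illustrative helper rather than part of the server code:

```python
# Illustrative sketch of how the tool's inputs map onto gnews settings.
# build_gnews_settings is a hypothetical helper, not part of the server.

def build_gnews_settings(topic: str, period: int = 7, max_results: int = 10) -> dict:
    """Mirror the handler's constraints (period >= 1, max_results >= 1)."""
    if period < 1:
        raise ValueError("period must be >= 1")
    if max_results < 1:
        raise ValueError("max_results must be >= 1")
    return {
        "topic": topic,
        "period": f"{period}d",      # gnews expects a day-suffixed string
        "max_results": max_results,
    }
```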
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It only states the tool finds articles by topic and lists topic values, with no disclosure of authentication needs, rate limits, error handling, or response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with a clear one-sentence purpose, but the lengthy list of topics makes it verbose. Grouping or referencing an enum could improve conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
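The suggested enum approach could look like this sketch, using typing.Literal so the schema itself enumerates the valid topics instead of the description repeating them (the Literal is truncated here for brevity; the real tool accepts the full set listed in its docstring):

```python
from typing import Literal, get_args

# Hypothetical sketch: the topic list is truncated for illustration;
# the real tool accepts the full set enumerated in its docstring.
NewsTopic = Literal["WORLD", "NATION", "BUSINESS", "TECHNOLOGY", "SPORTS", "SCIENCE", "HEALTH"]

def validate_topic(topic: str) -> str:
    """Normalize a topic and check it against the Literal's allowed values."""
    upper = topic.upper()
    if upper not in get_args(NewsTopic):
        raise ValueError(f"topic must be one of {get_args(NewsTopic)}")
    return upper
```

With a Literal (or an Enum) as the parameter type, schema generators emit a JSON Schema enum, so agents see the allowed values without the description having to list them.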

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description need not explain return values. However, it lacks guidance on usage context and behavioral details for a tool with five parameters. Complete enough for basic use but missing higher-level context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, but the description adds value by enumerating allowable topic values, which the schema only describes as a string. For other parameters, the description does not enhance beyond the schema, but the topic list is a significant aid.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Find articles by topic using Google News', identifying the verb and resource. However, it does not distinguish this tool from sibling tools like get_news_by_keyword or get_news_by_location, which also find articles but by different criteria.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus its siblings. It does not mention prerequisites, when not to use it, or alternative tools for different query types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jmanek/google-news-trends-mcp'
