google-news-trends-mcp

by jmanek

get_top_news

Retrieve top news stories from Google News, specifying how many days to look back, how many results to return, and whether to include full article data or concise generated summaries.

Instructions

Get top news stories from Google News.

Input Schema

Name         Required  Default  Description
full_data    No        False    Return full article data for each item. If False, a summary can be generated instead by setting the summarize flag.
max_results  No        10       Maximum number of results to return.
period       No        3        Number of days to look back for top articles.
summarize    No        True     Generate a summary of each article. LLM sampling is tried first, with NLP summarization as a fallback if sampling is unavailable.
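
A minimal arguments payload for a call to this tool might look like the following sketch; the values are illustrative, and any omitted parameter falls back to the defaults above.

# Illustrative arguments for a get_top_news call.
arguments = {
    "period": 7,         # look back one week
    "max_results": 5,    # return at most five articles
    "full_data": False,  # skip full article data...
    "summarize": True,   # ...and generate summaries instead
}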

Implementation Reference

  • MCP handler implementation for the 'get_top_news' tool. It delegates to the news.get_top_news helper, optionally summarizes articles using LLM or NLP, and serializes results into ArticleOut models.
    @mcp.tool(description=news.get_top_news.__doc__, tags={"news", "articles", "top"})
    async def get_top_news(
        ctx: Context,
        period: Annotated[int, Field(description="Number of days to look back for top articles.", ge=1)] = 3,
        max_results: Annotated[int, Field(description="Maximum number of results to return.", ge=1)] = 10,
        full_data: Annotated[
            bool,
            Field(
                description="Return full data for each article. If False a summary should be created by setting the summarize flag"
            ),
        ] = False,
        summarize: Annotated[
            bool,
            Field(
                description="Generate a summary of the article, will first try LLM Sampling but if unavailable will use nlp"
            ),
        ] = True,
    ) -> list[ArticleOut]:
        set_newspaper_article_fields(full_data)
        articles = await news.get_top_news(
            period=period,
            max_results=max_results,
            nlp=False,
            report_progress=ctx.report_progress,
        )
        if summarize:
            await summarize_articles(articles, ctx)
        await ctx.report_progress(progress=len(articles), total=len(articles))
        return [ArticleOut(**a.to_json(False)) for a in articles]
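
    As a rough sketch of exercising this handler end to end, the tool can be called through a FastMCP client. This is a minimal sketch, not the project's own test code; "server.py" is a placeholder for however the server is actually started.

    # Minimal sketch, assuming the server is exposed via FastMCP;
    # "server.py" is a placeholder for the actual entry point.
    import asyncio
    from fastmcp import Client

    async def main():
        async with Client("server.py") as client:
            result = await client.call_tool(
                "get_top_news",
                {"period": 7, "max_results": 5},
            )
            print(result)

    asyncio.run(main())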
  • Pydantic model defining the output schema for article data returned by get_top_news.
    class ArticleOut(BaseModelClean):
        title: Annotated[str, Field(description="Title of the article.")]
        url: Annotated[str, Field(description="Original article URL.")]
        read_more_link: Annotated[Optional[str], Field(description="Link to read more about the article.")] = None
        language: Annotated[Optional[str], Field(description="Language code of the article.")] = None
        meta_img: Annotated[Optional[str], Field(description="Meta image URL.")] = None
        movies: Annotated[Optional[list[str]], Field(description="List of movie URLs or IDs.")] = None
        meta_favicon: Annotated[Optional[str], Field(description="Favicon URL from meta data.")] = None
        meta_site_name: Annotated[Optional[str], Field(description="Site name from meta data.")] = None
        authors: Annotated[Optional[list[str]], Field(description="List of authors.")] = None
        publish_date: Annotated[Optional[str], Field(description="Publish date in ISO format.")] = None
        top_image: Annotated[Optional[str], Field(description="URL of the top image.")] = None
        images: Annotated[Optional[list[str]], Field(description="List of image URLs.")] = None
        text: Annotated[Optional[str], Field(description="Full text of the article.")] = None
        summary: Annotated[Optional[str], Field(description="Summary of the article.")] = None
        keywords: Annotated[Optional[list[str]], Field(description="Extracted keywords.")] = None
        tags: Annotated[Optional[list[str]], Field(description="Tags for the article.")] = None
        meta_keywords: Annotated[Optional[list[str]], Field(description="Meta keywords from the article.")] = None
        meta_description: Annotated[Optional[str], Field(description="Meta description from the article.")] = None
        canonical_link: Annotated[Optional[str], Field(description="Canonical link for the article.")] = None
        meta_data: Annotated[Optional[dict[str, str | int]], Field(description="Meta data dictionary.")] = None
        meta_lang: Annotated[Optional[str], Field(description="Language of the article.")] = None
        source_url: Annotated[Optional[str], Field(description="Source URL if different from original.")] = None
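
    To illustrate the shape of the output, an ArticleOut can be constructed and serialized like an ordinary Pydantic model. A minimal sketch, assuming BaseModelClean (not shown here) behaves like a standard Pydantic v2 BaseModel; the field values are placeholders.

    # Illustrative only: placeholder values, standard Pydantic v2 assumed.
    article = ArticleOut(
        title="Example headline",
        url="https://example.com/story",
        summary="A one-paragraph summary of the story.",
    )
    # Drop the many optional fields left at None for a compact payload.
    print(article.model_dump(exclude_none=True))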
  • Helper function implementing the core logic to fetch top news articles from Google News using the GNews client and process them into newspaper.Article objects.
    async def get_top_news(
        period: int = 3,
        max_results: int = 10,
        nlp: bool = True,
        report_progress: Optional[ProgressCallback] = None,
    ) -> list[newspaper.Article]:
        """
        Get top news stories from Google News.
        """
        google_news.period = f"{period}d"
        google_news.max_results = max_results
        gnews_articles = google_news.get_top_news()
        if not gnews_articles:
            logger.debug("No top news articles found.")
            return []
        return await process_gnews_articles(gnews_articles, nlp=nlp, report_progress=report_progress)
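
    Because this helper does not depend on MCP, it can be exercised directly. A minimal sketch, assuming the module is importable as google_news_trends_mcp.news (the import path is an assumption based on the package name and may differ from the actual layout):

    # Minimal sketch; the import path below is an assumption.
    import asyncio
    from google_news_trends_mcp import news

    # Fetch up to five top stories from the last week, with newspaper's
    # NLP step (keywords/summary extraction) enabled.
    articles = asyncio.run(news.get_top_news(period=7, max_results=5, nlp=True))
    for article in articles:
        print(article.title)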
  • Helper function to download and parse Google News articles into newspaper.Article instances, with optional NLP processing.
    async def process_gnews_articles(
        gnews_articles: list[dict],
        nlp: bool = True,
        report_progress: Optional[ProgressCallback] = None,
    ) -> list[newspaper.Article]:
        """
        Process a list of Google News articles and download them (async).
        Optionally report progress via report_progress callback.
        """
        articles = []
        total = len(gnews_articles)
        for idx, gnews_article in enumerate(gnews_articles):
            article = await download_article(gnews_article["url"])
            if article is None or not article.text:
                logger.debug(f"Failed to download article from {gnews_article['url']}:\n{article}")
                continue
            article.parse()
            if nlp:
                article.nlp()
            articles.append(article)
            if report_progress:
                await report_progress(idx, total)
        return articles
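
    The report_progress parameter only needs an async callable that accepts the current index and the total count. A minimal sketch of a compatible callback (log_progress is a hypothetical name, not part of the project):

    # Hypothetical callback matching the (index, total) shape used above.
    async def log_progress(idx: int, total: int) -> None:
        print(f"Downloaded {idx + 1}/{total} articles")

    # Passed through as, e.g.:
    # articles = await process_gnews_articles(gnews_articles, report_progress=log_progress)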

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jmanek/google-news-trends-mcp'
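
The same request can be made from Python with only the standard library; a minimal sketch:

# Equivalent request using urllib from the standard library.
import json
from urllib.request import urlopen

url = "https://glama.ai/api/mcp/v1/servers/jmanek/google-news-trends-mcp"
with urlopen(url) as response:
    server_info = json.load(response)
print(json.dumps(server_info, indent=2))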

If you have feedback or need assistance with the MCP directory API, please join our Discord server.