
google-news-trends-mcp

by jmanek

get_news_by_location

Retrieve and summarize Google News articles for a specific location, customizable by time period and result count, with options for detailed data or concise insights.

Instructions

Find articles by location using Google News.

Input Schema

Name         Required  Default  Description
full_data    No        false    Return full data for each article. If false, a summary can be generated by setting the summarize flag.
location     Yes       —        Name of city/state/country.
max_results  No        10       Maximum number of results to return.
period       No        7        Number of days to look back for articles.
summarize    No        true     Generate a summary of each article; LLM sampling is tried first, falling back to NLP if unavailable.
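
The schema above maps directly onto a tool-call arguments payload. A minimal sketch of such a payload (values are illustrative; the exact request envelope depends on the MCP client):

```python
import json

# Illustrative arguments for a get_news_by_location call;
# field names mirror the input schema above.
arguments = {
    "location": "San Francisco",  # required
    "period": 3,                  # look back 3 days (default: 7)
    "max_results": 5,             # default: 10
    "full_data": False,           # default: false
    "summarize": True,            # default: true
}

payload = json.dumps({"name": "get_news_by_location", "arguments": arguments})
print(payload)
```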

Implementation Reference

  • MCP handler function for the 'get_news_by_location' tool. It defines input parameters with Pydantic validation, fetches articles using the helper from news.py, optionally summarizes them using LLM or NLP, reports progress, and returns formatted ArticleOut objects.
    @mcp.tool(
        description=news.get_news_by_location.__doc__,
        tags={"news", "articles", "location"},
    )
    async def get_news_by_location(
        ctx: Context,
        location: Annotated[str, Field(description="Name of city/state/country.")],
        period: Annotated[int, Field(description="Number of days to look back for articles.", ge=1)] = 7,
        max_results: Annotated[int, Field(description="Maximum number of results to return.", ge=1)] = 10,
        full_data: Annotated[
            bool,
            Field(
                description="Return full data for each article. If False a summary should be created by setting the summarize flag"
            ),
        ] = False,
        summarize: Annotated[
            bool,
            Field(
                description="Generate a summary of the article, will first try LLM Sampling but if unavailable will use nlp"
            ),
        ] = True,
    ) -> list[ArticleOut]:
        set_newspaper_article_fields(full_data)
        articles = await news.get_news_by_location(
            location=location,
            period=period,
            max_results=max_results,
            nlp=False,
            report_progress=ctx.report_progress,
        )
        if summarize:
            await summarize_articles(articles, ctx)
        await ctx.report_progress(progress=len(articles), total=len(articles))
        return [ArticleOut(**a.to_json(False)) for a in articles]
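  • The summarize_articles helper called above is not shown here. A dependency-free sketch of the LLM-first, NLP-fallback behavior its description implies (summarize_text, llm_sample, and nlp_summarize are hypothetical stand-ins, not the project's real helpers):

```python
from typing import Callable, Optional

def summarize_text(
    text: str,
    llm_sample: Optional[Callable[[str], str]] = None,
    nlp_summarize: Callable[[str], str] = lambda t: t[:200],
) -> str:
    """Try LLM sampling first; fall back to a local NLP summary."""
    if llm_sample is not None:
        try:
            return llm_sample(text)  # preferred path: LLM sampling via the client
        except Exception:
            pass                     # sampling unavailable -> fall through to NLP
    return nlp_summarize(text)       # fallback: local NLP summarization

print(summarize_text("Some long article text.", llm_sample=None))
```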
  • Pydantic BaseModel used as output schema for article data in get_news_by_location and similar tools.
    class ArticleOut(BaseModelClean):
        title: Annotated[str, Field(description="Title of the article.")]
        url: Annotated[str, Field(description="Original article URL.")]
        read_more_link: Annotated[Optional[str], Field(description="Link to read more about the article.")] = None
        language: Annotated[Optional[str], Field(description="Language code of the article.")] = None
        meta_img: Annotated[Optional[str], Field(description="Meta image URL.")] = None
        movies: Annotated[Optional[list[str]], Field(description="List of movie URLs or IDs.")] = None
        meta_favicon: Annotated[Optional[str], Field(description="Favicon URL from meta data.")] = None
        meta_site_name: Annotated[Optional[str], Field(description="Site name from meta data.")] = None
        authors: Annotated[Optional[list[str]], Field(description="list of authors.")] = None
        publish_date: Annotated[Optional[str], Field(description="Publish date in ISO format.")] = None
        top_image: Annotated[Optional[str], Field(description="URL of the top image.")] = None
        images: Annotated[Optional[list[str]], Field(description="list of image URLs.")] = None
        text: Annotated[Optional[str], Field(description="Full text of the article.")] = None
        summary: Annotated[Optional[str], Field(description="Summary of the article.")] = None
        keywords: Annotated[Optional[list[str]], Field(description="Extracted keywords.")] = None
        tags: Annotated[Optional[list[str]], Field(description="Tags for the article.")] = None
        meta_keywords: Annotated[Optional[list[str]], Field(description="Meta keywords from the article.")] = None
        meta_description: Annotated[Optional[str], Field(description="Meta description from the article.")] = None
        canonical_link: Annotated[Optional[str], Field(description="Canonical link for the article.")] = None
        meta_data: Annotated[Optional[dict[str, str | int]], Field(description="Meta data dictionary.")] = None
        meta_lang: Annotated[Optional[str], Field(description="Language of the article.")] = None
        source_url: Annotated[Optional[str], Field(description="Source URL if different from original.")] = None
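  • The BaseModelClean base class is not shown above; presumably it omits unset/None fields when serializing. A dependency-free sketch of that idea using dataclasses (ArticleSketch and to_clean_dict are illustrative names, not the project's API):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ArticleSketch:
    title: str                     # required, like ArticleOut.title
    url: str                       # required, like ArticleOut.url
    summary: Optional[str] = None  # optional fields default to None

    def to_clean_dict(self) -> dict:
        # Drop None-valued fields, mimicking a "clean" output model.
        return {k: v for k, v in asdict(self).items() if v is not None}

a = ArticleSketch(title="Example", url="https://example.com")
print(a.to_clean_dict())  # only populated fields survive
```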
  • Core helper function that queries Google News API for articles by location and delegates processing to process_gnews_articles.
    async def get_news_by_location(
        location: str,
        period=7,
        max_results: int = 10,
        nlp: bool = True,
        report_progress: Optional[ProgressCallback] = None,
    ) -> list[newspaper.Article]:
        """Find articles by location using Google News."""
        google_news.period = f"{period}d"
        google_news.max_results = max_results
        gnews_articles = google_news.get_news_by_location(location)
        if not gnews_articles:
            logger.debug(f"No articles found for location '{location}' in the last {period} days.")
            return []
        return await process_gnews_articles(gnews_articles, nlp=nlp, report_progress=report_progress)
  • Supporting helper that downloads and parses individual articles from Google News URLs using newspaper, cloudscraper, or playwright fallback, with optional NLP.
    async def process_gnews_articles(
        gnews_articles: list[dict],
        nlp: bool = True,
        report_progress: Optional[ProgressCallback] = None,
    ) -> list[newspaper.Article]:
        """
        Process a list of Google News articles and download them (async).
        Optionally report progress via report_progress callback.
        """
        articles = []
        total = len(gnews_articles)
        for idx, gnews_article in enumerate(gnews_articles):
            article = await download_article(gnews_article["url"])
            if article is None or not article.text:
                logger.debug(f"Failed to download article from {gnews_article['url']}:\n{article}")
                continue
            article.parse()
            if nlp:
                article.nlp()
            articles.append(article)
            if report_progress:
                await report_progress(idx, total)
        return articles
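  • The helper awaits report_progress(idx, total) after each article, so any async callable with that signature works as a callback. A minimal consumer compatible with that shape (names here are illustrative):

```python
import asyncio

# Record each (done, total) progress call, as the helper above would emit them.
progress_log: list[tuple[int, int]] = []

async def record_progress(done: int, total: int) -> None:
    progress_log.append((done, total))

async def demo() -> None:
    urls = ["u1", "u2", "u3"]  # stand-ins for article URLs
    for idx, _ in enumerate(urls):
        await record_progress(idx, len(urls))

asyncio.run(demo())
print(progress_log)  # [(0, 3), (1, 3), (2, 3)]
```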

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jmanek/google-news-trends-mcp'
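
The same endpoint can be queried from Python's standard library, equivalent to the curl command above (no authentication assumed; the network call is left commented out):

```python
import json
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/jmanek/google-news-trends-mcp"
req = urllib.request.Request(url, method="GET")

# Uncomment to perform the request:
# with urllib.request.urlopen(req) as resp:
#     server_info = json.load(resp)
#     print(server_info)

print(req.full_url)
```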

If you have feedback or need assistance with the MCP directory API, please join our Discord server.