ngram_frequencies
Analyze word frequency trends over time in Norwegian books and newspapers using NGram data from the National Library of Norway's Digital Humanities Lab.
Instructions
Get word frequency trends over time using NGram analysis.
Args:
- words: List of words to analyze
- corpus: Corpus type. Options: 'bok' (books), 'avis' (newspapers). Default: 'bok'
- from_year: Start year (default: 1810)
- to_year: End year (default: 2020)
- smooth: Smoothing parameter for the frequency curve (default: 1)
Returns: JSON string containing frequency data over time
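The returned JSON maps each year to a `{word: frequency}` object, because the underlying DataFrame is serialized with `to_json(orient='index')`. A minimal sketch of that shape, using a hypothetical frame (the words and values here are made up for illustration):

```python
import json

import pandas as pd

# Hypothetical frequency frame mimicking what the tool serializes:
# one row per year, one column per queried word.
frame = pd.DataFrame(
    {"demokrati": [12.0, 15.5], "frihet": [30.2, 28.1]},
    index=[1950, 1951],
)

# The tool returns frame.to_json(orient='index', force_ascii=False),
# i.e. {"1950": {"demokrati": ..., "frihet": ...}, "1951": {...}}.
payload = frame.to_json(orient="index", force_ascii=False)
parsed = json.loads(payload)
print(parsed["1950"])
```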
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| words | Yes | List of words to analyze | |
| corpus | No | Corpus type: 'bok' (books) or 'avis' (newspapers) | bok |
| from_year | No | Start year | 1810 |
| to_year | No | End year | 2020 |
| smooth | No | Smoothing parameter for the frequency curve | 1 |
Implementation Reference
- src/dhlab_mcp/server.py:54-84 (handler)

The primary handler function for the `ngram_frequencies` tool. It uses `dhlab.NgramBook` or `dhlab.NgramNews` to compute frequency trends over time for the given words in the specified corpus and year range. The `@mcp.tool()` decorator registers it as an MCP tool; the input schema is derived from the type annotations and docstring.

```python
@mcp.tool()
def ngram_frequencies(
    words: list[str],
    corpus: str = "bok",
    from_year: int = 1810,
    to_year: int = 2020,
    smooth: int = 1,
) -> str:
    """Get word frequency trends over time using NGram analysis.

    Args:
        words: List of words to analyze
        corpus: Corpus type. Options: 'bok' (books), 'avis' (newspapers). Default: 'bok'
        from_year: Start year (default: 1810)
        to_year: End year (default: 2020)
        smooth: Smoothing parameter for the frequency curve (default: 1)

    Returns:
        JSON string containing frequency data over time
    """
    try:
        if corpus == "avis":
            ng = dhlab.NgramNews(words, from_year=from_year, to_year=to_year, smooth=smooth)
        else:
            ng = dhlab.NgramBook(words, from_year=from_year, to_year=to_year, smooth=smooth)

        if hasattr(ng, 'frame') and ng.frame is not None:
            return ng.frame.to_json(orient='index', force_ascii=False)
        return "No frequency data available"
    except Exception as e:
        return f"Error getting ngram frequencies: {str(e)}"
```
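On the client side, the tool's JSON string can be turned back into a year-indexed DataFrame with `pandas.read_json` using the same `orient='index'`. A sketch, using a hypothetical response (the word and values are illustrative, not real corpus data):

```python
import io

import pandas as pd

# Hypothetical tool response: year -> {word: frequency}.
response = '{"1950":{"demokrati":12.0},"1951":{"demokrati":15.5}}'

# orient='index' restores the years as the index, words as columns.
df = pd.read_json(io.StringIO(response), orient="index")
df.index = df.index.astype(int)  # ensure integer year index
print(df)
```

Wrapping the string in `io.StringIO` avoids the deprecated literal-JSON input path in recent pandas versions.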