YNAB MCP Server

by dgalarza

compare_spending_by_year

Analyze category spending trends across multiple years to identify patterns and track budget performance over time with visual comparisons.

Instructions

Compare spending for a category across multiple years.

Args:
    budget_id: The ID of the budget (use 'last-used' for default budget)
    category_id: The category ID to analyze
    start_year: Starting year (e.g., 2020)
    num_years: Number of years to compare (default: 5)
    include_graph: Include terminal graph visualization (default: True)

Returns:
    JSON string with year-over-year comparison including totals, changes, percentage changes, and optional graph
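
For example, an MCP client can invoke this tool as sketched below, using the official mcp Python SDK over stdio. The launch command and the category ID are assumptions for illustration, not values from this server's documentation.

    import asyncio
    import json

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Hypothetical launch command for this server; adjust to your install.
    server = StdioServerParameters(command="uv", args=["run", "ynab-mcp-server"])

    async def main() -> None:
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "compare_spending_by_year",
                    arguments={
                        "budget_id": "last-used",
                        "category_id": "hypothetical-category-id",
                        "start_year": 2020,
                        "num_years": 5,
                        "include_graph": False,
                    },
                )
                # The tool returns a JSON string in the first text block.
                print(json.loads(result.content[0].text))

    asyncio.run(main())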

Input Schema

Name           Required  Description  Default
budget_id      Yes
category_id    Yes
include_graph  No                     True
num_years      No                     5
start_year     Yes

Output Schema

Name    Required  Description  Default
result  Yes
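
The output schema declares only a single required result field: the JSON string built by the handler below. Decoded, it has roughly this shape (a sketch inferred from the implementation; the values are illustrative, not real data):

    {
      "category_id": "<category id>",
      "years": "2020-2024",
      "average_per_year": -825.0,
      "yearly_comparison": [
        {"year": "2020", "total_spent": -750.0},
        {"year": "2021", "total_spent": -900.0,
         "change_from_previous": -150.0, "percent_change": -20.0},
        ...
      ],
      "graph": "<terminal bar chart, present only when include_graph is true>"
    }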

Implementation Reference

  • MCP tool registration for compare_spending_by_year. A thin wrapper that fetches the YNABClient singleton, calls its method, and returns the result as JSON.
    @mcp.tool()
    async def compare_spending_by_year(
        budget_id: str,
        category_id: str,
        start_year: int,
        num_years: int = 5,
        include_graph: bool = True,
    ) -> str:
        """Compare spending for a category across multiple years.
    
        Args:
            budget_id: The ID of the budget (use 'last-used' for default budget)
            category_id: The category ID to analyze
            start_year: Starting year (e.g., 2020)
            num_years: Number of years to compare (default: 5)
            include_graph: Include terminal graph visualization (default: True)
    
        Returns:
            JSON string with year-over-year comparison including totals, changes, percentage changes, and optional graph
        """
        client = get_ynab_client()
        result = await client.compare_spending_by_year(
            budget_id, category_id, start_year, num_years, include_graph
        )
        return json.dumps(result, indent=2)
  • Core handler implementation in YNABClient. Fetches transactions from the YNAB API since start_year, filters them by category_id and date range, aggregates yearly spending totals, computes year-over-year absolute and percentage changes, calculates the per-year average, and optionally generates a terminal graph via _generate_graph. A standalone sanity check of the aggregation math is sketched after the code.
    async def compare_spending_by_year(
        self,
        budget_id: str,
        category_id: str,
        start_year: int,
        num_years: int = 5,
        include_graph: bool = True,
    ) -> dict[str, Any]:
        """Compare spending for a category across multiple years.
    
        Args:
            budget_id: The budget ID or 'last-used'
            category_id: The category ID to analyze
            start_year: Starting year (e.g., 2020)
            num_years: Number of years to compare (default: 5)
            include_graph: Include terminal graph visualization (default: True)
    
        Returns:
            Year-over-year comparison with totals and percentage changes
        """
        try:
            # Get all transactions since the start year
            since_date = f"{start_year}-01-01"
            end_year = start_year + num_years - 1
            until_date = f"{end_year}-12-31"
    
            url = f"{self.api_base_url}/budgets/{budget_id}/transactions"
            params = {"since_date": since_date}
    
            result = await self._make_request_with_retry("get", url, params=params)
    
            txn_data = result["data"]["transactions"]
    
            # Aggregate by year
            yearly_totals = {}
            for year in range(start_year, end_year + 1):
                yearly_totals[str(year)] = 0
    
            for txn in txn_data:
                # Filter by category and date range
                if txn.get("category_id") != category_id:
                    continue
                if txn["date"] > until_date:
                    continue
    
                year = txn["date"][:4]
                if year in yearly_totals:
                    # YNAB stores amounts in milliunits; divide by 1000 for currency units
                    amount = txn["amount"] / 1000 if txn.get("amount") else 0
                    yearly_totals[year] += amount
    
            # Calculate year-over-year changes
            comparisons = []
            years_sorted = sorted(yearly_totals.keys())
    
            for i, year in enumerate(years_sorted):
                year_data = {
                    "year": year,
                    "total_spent": yearly_totals[year],
                }
    
                if i > 0:
                    prev_year = years_sorted[i - 1]
                    prev_total = yearly_totals[prev_year]
                    change = yearly_totals[year] - prev_total
    
                    if prev_total != 0:
                        percent_change = (change / abs(prev_total)) * 100
                    else:
                        percent_change = 0 if change == 0 else float("inf")
    
                    year_data["change_from_previous"] = change
                    year_data["percent_change"] = percent_change
    
                comparisons.append(year_data)
    
            # Calculate overall statistics
            totals = [yearly_totals[year] for year in years_sorted]
            average_per_year = sum(totals) / len(totals) if totals else 0
    
            result_data = {
                "category_id": category_id,
                "years": f"{start_year}-{end_year}",
                "average_per_year": average_per_year,
                "yearly_comparison": comparisons,
            }
    
            # Add graph if requested
            if include_graph and yearly_totals:
                graph_data = [(year, yearly_totals[year]) for year in years_sorted]
                result_data["graph"] = self._generate_graph(
                    graph_data, f"Year-over-Year Comparison: {start_year}-{end_year}"
                )
    
            return result_data
        except Exception as e:
            raise Exception(f"Failed to compare spending by year: {e}") from e
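
The milliunit conversion and the percent-change formula above are easy to verify in isolation. A minimal standalone sketch with fabricated transactions (not real YNAB data):

    # Standalone sanity check of the aggregation logic, using made-up
    # transactions in YNAB milliunits (negative amounts are outflows).
    transactions = [
        {"category_id": "cat-1", "date": "2020-03-01", "amount": -500_000},
        {"category_id": "cat-1", "date": "2020-09-10", "amount": -250_000},
        {"category_id": "cat-1", "date": "2021-01-15", "amount": -900_000},
        {"category_id": "cat-2", "date": "2021-02-02", "amount": -100_000},  # other category, skipped
    ]

    yearly = {"2020": 0.0, "2021": 0.0}
    for txn in transactions:
        if txn["category_id"] != "cat-1":
            continue
        year = txn["date"][:4]
        if year in yearly:
            yearly[year] += txn["amount"] / 1000  # milliunits -> currency units

    change = yearly["2021"] - yearly["2020"]        # -900.0 - (-750.0) = -150.0
    percent = (change / abs(yearly["2020"])) * 100  # -150 / 750 * 100 = -20.0
    print(yearly, change, percent)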
  • Helper method that generates an ASCII terminal bar graph with the termgraph library, capturing stdout to return the graph as a string. Used by compare_spending_by_year for the visual year-over-year comparison; a safer capture pattern is sketched after the code.
    def _generate_graph(self, data: list[tuple], title: str = "") -> str:
        """Generate a terminal graph using termgraph.
    
        Args:
            data: List of (label, value) tuples
            title: Graph title
    
        Returns:
            String containing the terminal graph
        """
        if not data:
            return ""
    
        # Capture termgraph output
        old_stdout = sys.stdout
        sys.stdout = StringIO()
    
        try:
            # Prepare data for termgraph
            labels = [label for label, _ in data]
            values = [[abs(value)] for _, value in data]
    
            # Configure termgraph
            args = {
                "stacked": False,
                "width": 50,
                "format": "{:.2f}",
                "suffix": "",
                "no_labels": False,
                "color": None,
                "vertical": False,
                "different_scale": False,
                "calendar": False,
                "start_dt": None,
                "custom_tick": "",
                "delim": "",
                "verbose": False,
                "label_before": False,
                "histogram": False,
                "no_values": False,
            }
    
            # Print title
            if title:
                print(f"\n{title}")
                print("=" * len(title))
    
            # Generate graph
            tg.chart(colors=[], data=values, args=args, labels=labels)
    
            # Get the output
            output = sys.stdout.getvalue()
            return output
    
        finally:
            sys.stdout = old_stdout
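
Swapping sys.stdout by hand works, but contextlib.redirect_stdout expresses the same capture more safely, restoring the stream even on error. A sketch of the pattern, independent of termgraph (the helper name is hypothetical):

    import contextlib
    from io import StringIO

    def capture_stdout(render, *args, **kwargs) -> str:
        """Run a print-based renderer and return everything it wrote to stdout."""
        buffer = StringIO()
        with contextlib.redirect_stdout(buffer):
            render(*args, **kwargs)
        return buffer.getvalue()

    # Works with any function that prints, including termgraph's chart():
    text = capture_stdout(print, "captured line")
    assert text == "captured line\n"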
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns a JSON string with comparison data and an optional graph, which adds some behavioral context. However, it says nothing about permissions, rate limits, data freshness, or side effects (e.g., whether the tool is read-only). For a tool with no annotations, this is insufficient to fully understand its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
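
MCP tool annotations are the structured way to declare this kind of behavior. A sketch of how the registration could mark the tool read-only, assuming a recent mcp Python SDK where FastMCP's tool() accepts annotations and ToolAnnotations is available in mcp.types:

    from mcp.server.fastmcp import FastMCP
    from mcp.types import ToolAnnotations

    mcp = FastMCP("ynab")

    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,   # reads budget data, never modifies it
            openWorldHint=True,  # calls the external YNAB API
        )
    )
    async def compare_spending_by_year(
        budget_id: str, category_id: str, start_year: int,
        num_years: int = 5, include_graph: bool = True,
    ) -> str:
        ...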

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with a clear purpose sentence, followed by an 'Args' section with bullet-like explanations, and ends with a 'Returns' section. Each sentence adds value, with no redundant information. It could be slightly more concise by integrating the default values more seamlessly, but overall it's efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no annotations, but with an output schema), the description is fairly complete. It explains the purpose, parameters, and return format. The output schema likely covers return values in detail, so the description doesn't need to elaborate further. However, it lacks behavioral context like error handling or data constraints, which would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics for all parameters: 'budget_id' (with 'last-used' default note), 'category_id' (to analyze), 'start_year' (with example), 'num_years' (default and purpose), and 'include_graph' (default and effect). This goes beyond the schema's basic titles, providing context and usage hints that aid parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
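
With FastMCP, those schema descriptions can live in the function signature itself and flow into the generated JSON schema. A sketch using typing.Annotated with Pydantic Field, assuming an SDK version that honors Field metadata; the wording of each description is illustrative:

    from typing import Annotated

    from pydantic import Field

    @mcp.tool()
    async def compare_spending_by_year(
        budget_id: Annotated[str, Field(description="Budget ID, or 'last-used' for the default budget")],
        category_id: Annotated[str, Field(description="Category ID to analyze")],
        start_year: Annotated[int, Field(description="First year to include, e.g. 2020")],
        num_years: Annotated[int, Field(description="How many years to compare")] = 5,
        include_graph: Annotated[bool, Field(description="Attach a terminal bar graph")] = True,
    ) -> str:
        ...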

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare spending for a category across multiple years.' It specifies the verb ('compare') and resource ('spending for a category'), but doesn't explicitly differentiate from sibling tools like 'get_category_spending_summary' or 'get_category', which might offer related functionality. The purpose is specific but lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description mentions what it does but doesn't specify contexts, prerequisites, or exclusions. For example, it doesn't clarify if this is for historical analysis versus real-time data, or how it differs from 'get_category_spending_summary'. This leaves the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/dgalarza/ynab-mcp-dgalarza'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.