get_productivity_trend

Analyze productivity trends over time by retrieving daily productivity pulse data for the last N days, helping identify patterns and calculate averages to optimize time usage.

Instructions

Get productivity pulse trend for the last N days.

Args: days: Number of days to look back (default: 7, max 14)

Shows the daily productivity pulse with visual bars and calculates averages. Useful for identifying patterns and trends over time.
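To illustrate the output format, here is a standalone sketch (not code from the server) of how one trend line is assembled, mirroring the formatting logic in the handler below; the sample date, pulse, and duration values are made up.

```python
# Illustrative sketch: building one trend line the way the handler does.
# Sample values are hypothetical.
def productivity_bar(score: float, width: int = 10) -> str:
    """Visual bar for a 0-100 productivity score."""
    filled = int(score * width / 100)
    return "\u2588" * filled + "\u2591" * (width - filled)

date = "2024-08-05"
pulse = 72.0
duration = "6h 45m"

date_short = date[5:]  # keep only MM-DD
line = f"{date_short}: {productivity_bar(pulse)} {pulse:.0f} ({duration})"
print(line)  # 08-05: ███████░░░ 72 (6h 45m)
```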

Input Schema

Name     Required   Description   Default
days     No                       7

Output Schema

Name     Required   Description   Default
result   Yes
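The schema tables above carry no descriptions. For orientation, here is a plausible rendering of the generated input schema, sketched as a Python dict: only the field name and default value are confirmed by the handler signature; the surrounding structure is an assumption about what FastMCP derives from the type hints.

```python
# Hypothetical JSON Schema for the tool input, as a Python dict.
# Only 'days' and its default of 7 are confirmed by the handler signature;
# the rest is an assumption about FastMCP's schema generation.
input_schema = {
    "type": "object",
    "properties": {
        "days": {"type": "integer", "default": 7},
    },
    "required": [],  # 'days' has a default, so it is optional
}
```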

Implementation Reference

  • Primary handler implementation for the get_productivity_trend tool. Decorated with @mcp.tool() for FastMCP registration. Fetches daily summaries via RescueTimeClient, generates visual trend bars using productivity_bar helper, computes averages, and formats output string.
    @mcp.tool()
    async def get_productivity_trend(days: int = 7) -> str:
        """Get productivity pulse trend for the last N days.
    
        Args:
            days: Number of days to look back (default: 7, max 14)
    
        Shows the daily productivity pulse with visual bars and calculates averages.
        Useful for identifying patterns and trends over time.
        """
        try:
            client = RescueTimeClient()
            summaries = await client.get_daily_summary()
    
            if not summaries:
                return "No productivity data available."
    
            # Limit to requested days
            summaries = summaries[:min(days, len(summaries))]
    
            lines = [f"Productivity Trend (last {len(summaries)} days):", ""]
    
            for day in summaries:
                pulse = day.productivity_pulse
                bar = productivity_bar(pulse)
                date_short = day.date[5:]  # MM-DD
                lines.append(f"{date_short}: {bar} {pulse:.0f} ({day.total_duration_formatted})")
    
            # Calculate averages
            if summaries:
                avg_pulse = sum(d.productivity_pulse for d in summaries) / len(summaries)
                avg_productive = sum(d.all_productive_percentage for d in summaries) / len(summaries)
                total_hours = sum(d.total_hours for d in summaries)
                lines.append("")
                lines.append(f"Average: {avg_pulse:.0f} pulse, {avg_productive:.0f}% productive")
                lines.append(f"Total logged: {format_hours_minutes(total_hours)}")
    
            return "\n".join(lines)
    
        except RescueTimeAuthError as e:
            return f"Authentication error: {e}"
        except RescueTimeAPIError as e:
            return f"API error: {e}"
  • Pydantic model for DailySummary data structure, parsed from RescueTime API and used directly in the tool handler for trend computation.
    class DailySummary(BaseModel):
        """Daily summary from the Daily Summary Feed API."""
    
        date: str
        productivity_pulse: float
        very_productive_percentage: float
        productive_percentage: float
        neutral_percentage: float
        distracting_percentage: float
        very_distracting_percentage: float
        all_productive_percentage: float
        all_distracting_percentage: float
        total_hours: float
        very_productive_hours: float
        productive_hours: float
        neutral_hours: float
        distracting_hours: float
        very_distracting_hours: float
        all_productive_hours: float
        all_distracting_hours: float
        total_duration_formatted: str
        very_productive_duration_formatted: str
        productive_duration_formatted: str
        neutral_duration_formatted: str
        distracting_duration_formatted: str
        very_distracting_duration_formatted: str
        all_productive_duration_formatted: str
        all_distracting_duration_formatted: str
  • Helper function to generate visual progress bars for productivity pulse scores, used in the trend display.
    def productivity_bar(score: float, width: int = 10) -> str:
        """Create a visual bar for productivity score (0-100)."""
        filled = int(score * width / 100)
        return "\u2588" * filled + "\u2591" * (width - filled)
  • RescueTimeClient.get_daily_summary method, the data source providing DailySummary list for the tool's trend analysis.
    async def get_daily_summary(self) -> list[DailySummary]:
        """Get daily summary feed (last 14 days of daily rollups).
    
        Returns productivity pulse, time by productivity level, and category breakdowns.
        """
        data = await self._request("daily_summary_feed")
    
        if not data:
            return []
    
        return [DailySummary.model_validate(day) for day in data]
  • Formatting helpers for durations used in average total logged time output.
    def format_hours_minutes(hours: float) -> str:
        """Format hours as 'Xh Ym'."""
        h = int(hours)
        m = int((hours - h) * 60)
        if h > 0:
            return f"{h}h {m}m"
        return f"{m}m"
    
    
    def format_duration(seconds: int) -> str:
        """Format seconds as 'Xh Ym'."""
        return format_hours_minutes(seconds / 3600)
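The averaging arithmetic in the handler can be exercised standalone. This sketch substitutes a stdlib dataclass for the Pydantic DailySummary model and reuses the format_hours_minutes helper shown above; the sample pulse, percentage, and hour values are made up.

```python
# Standalone sketch of the trend math the handler performs, with a stdlib
# dataclass standing in for DailySummary. Sample values are hypothetical.
from dataclasses import dataclass

@dataclass
class Day:
    productivity_pulse: float
    all_productive_percentage: float
    total_hours: float

def format_hours_minutes(hours: float) -> str:
    """Format hours as 'Xh Ym'."""
    h = int(hours)
    m = int((hours - h) * 60)
    return f"{h}h {m}m" if h > 0 else f"{m}m"

summaries = [Day(72, 61.5, 6.75), Day(80, 70.0, 7.5), Day(65, 55.0, 5.25)]

avg_pulse = sum(d.productivity_pulse for d in summaries) / len(summaries)
avg_productive = sum(d.all_productive_percentage for d in summaries) / len(summaries)
total_hours = sum(d.total_hours for d in summaries)

print(f"Average: {avg_pulse:.0f} pulse, {avg_productive:.0f}% productive")
print(f"Total logged: {format_hours_minutes(total_hours)}")
```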
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool 'Shows the daily productivity pulse with visual bars and calculates averages,' adding behavioral context about output format and calculations. However, it lacks details on permissions, rate limits, or data sources, which are important for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose, followed by parameter details and usage context. Each sentence adds value: the first defines the tool, the second explains the parameter, and the third describes output and utility. There's minimal waste, though it could be slightly more structured with bullet points.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has one parameter and no annotations but does define an output schema, the description is fairly complete. It covers the purpose, parameter semantics, and output behavior ('Shows... visual bars and calculates averages'). The output schema likely handles return values, so the description doesn't need to detail them. However, it could improve by addressing sibling tool differentiation or authentication needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains the 'days' parameter as 'Number of days to look back' with a default and max value, clarifying its purpose and constraints. Since there's only one parameter, this compensates well for the schema gap, though it could mention data types or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get productivity pulse trend for the last N days.' It specifies the verb ('Get') and resource ('productivity pulse trend') with a temporal scope ('last N days'). However, it doesn't explicitly differentiate from sibling tools like 'get_hourly_productivity' or 'get_today_summary' beyond mentioning 'daily' productivity pulse.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it's 'Useful for identifying patterns and trends over time,' which suggests when to use this tool. However, it doesn't provide explicit guidance on when to choose this over alternatives like 'get_today_summary' or 'get_hourly_productivity,' nor does it mention any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
