get_running_trends

Analyze running performance trends over time to track progress and identify patterns in your training data.

Instructions

Get running performance trends over a specified period. Response size optimized for Claude context window.

Input Schema

Name: months_back
Required: No
Description: Number of months to analyze (default 3, reduced from 6 to avoid context window issues)
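
For reference, a call to this tool over MCP would be a standard tools/call request along the lines of the sketch below; the months_back value shown is simply the documented default, and the request id is arbitrary.

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "get_running_trends",
        "arguments": { "months_back": 3 }
      }
    }
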
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Response size optimized for Claude context window,' which adds context about output constraints, but fails to describe other key traits: whether this is a read-only operation, potential rate limits, authentication needs, or what the response format includes (e.g., trends as graphs or data points). For an unannotated tool, these are significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
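
For comparison, MCP tool annotations can disclose some of these traits structurally. A sketch of what this server could declare, assuming (this is unconfirmed) that the tool is a read-only query against the remote Garmin Connect service:

    {
      "name": "get_running_trends",
      "annotations": {
        "readOnlyHint": true,
        "openWorldHint": true
      }
    }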

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two front-loaded sentences, the first stating the purpose and the second adding a technical constraint. There is no unnecessary verbiage, and each sentence earns its place. It could be marginally improved by explicitly tying 'a specified period' to the months_back parameter, but overall it is efficient and well-sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (one parameter, no output schema, no annotations), the description is partially complete. It covers the purpose and a technical constraint but lacks details on usage guidelines, behavioral traits, and output specifics. Without annotations or an output schema, an agent is left guessing about the response format and operational context; the description is adequate, but the gaps make a successful first invocation less likely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not explicitly mention the 'months_back' parameter or add meaning beyond what the input schema provides. Since schema description coverage is 100%, the schema already documents the parameter with a description and default value. The description's mention of 'specified period' aligns with the parameter but doesn't offer additional semantics, so the baseline score of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
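
Given the parameter table above, the input schema presumably looks roughly like the following; the integer type is an assumption (the server could just as well declare number):

    {
      "type": "object",
      "properties": {
        "months_back": {
          "type": "integer",
          "default": 3,
          "description": "Number of months to analyze (default 3, reduced from 6 to avoid context window issues)"
        }
      },
      "required": []
    }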

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get running performance trends over a specified period.' It specifies the verb ('Get') and resource ('running performance trends'), and distinguishes it from siblings like 'get_weekly_running_summary' or 'get_advanced_running_metrics' by focusing on trends over time. However, it doesn't explicitly differentiate from all siblings, such as 'get_recent_running_activities', which might also involve time-based data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'Response size optimized for Claude context window,' which hints at a technical constraint but doesn't specify use cases, prerequisites, or comparisons to siblings like 'get_weekly_running_summary' for shorter-term data or 'analyze_training_load' for different metrics. Without explicit when/when-not instructions, the agent lacks clear direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
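
As a purely hypothetical illustration, using only the sibling tools named in this review, a description with explicit routing guidance might read: "Analyze multi-month running performance trends. Use for long-term progress questions; prefer get_weekly_running_summary for a single recent week, get_recent_running_activities for individual workouts, or analyze_training_load for training-load metrics."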

