
analyze_data_tool

Analyze Strava activity data by executing custom Python code to calculate metrics, summarize workouts, and extract insights from athlete statistics.

Instructions

Execute Python code to analyze Strava data safely using Monty.

Args:
    code: Python code to execute. The data is available as a variable named 'data'.
          Example: "sum(activity['distance'] for activity in data) / 1000"
    data: The data to analyze (e.g. a list of activities, athlete stats).
          Can be passed as a JSON object (list/dict) or a JSON string.
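For instance, assuming `data` is a list of activity dicts with a `distance` field in meters (a typical Strava activity shape; the field names here are illustrative), the example expression from the docstring computes the total distance in kilometers:

```python
# Hypothetical sample activities; real Strava records carry many more fields.
data = [
    {"name": "Morning Run", "distance": 5000.0},    # metres
    {"name": "Evening Ride", "distance": 12500.0},
]

# The expression passed as `code` to analyze_data_tool:
total_km = sum(activity["distance"] for activity in data) / 1000
print(total_km)  # 17.5
```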

Input Schema

Name | Required | Description | Default
-----|----------|-------------|--------
code | Yes      |             |
data | Yes      |             |

Implementation Reference

  • The main MCP tool handler that executes Python code to analyze Strava data. Handles JSON string parsing, delegates to analyze_data helper, and provides error handling.
    import json
    from typing import Any

    @mcp.tool()
    def analyze_data_tool(code: str, data: Any) -> Any:
        """
        Execute Python code to analyze Strava data safely using Monty.
    
        Args:
            code: Python code to execute. The data is available as a variable named 'data'.
                  Example: "sum(activity['distance'] for activity in data) / 1000"
            data: The data to analyze (e.g. list of activities, athlete stats).
                  Can be passed as a JSON object (list/dict) or a JSON string.
        """
        # If data is a string, try to parse it as JSON
        if isinstance(data, str):
            try:
                data = json.loads(data)
            except json.JSONDecodeError:
                pass  # Treat as raw string if not valid JSON
    
        try:
            result = analyze_data(code, data)
            return result
        except Exception as e:
            return f"Error executing code: {str(e)}"
  • Core implementation using pydantic_monty to safely execute Python code. Injects data as a variable and returns the execution result with error handling.
    import pydantic_monty
    from typing import Any

    def analyze_data(code: str, data: Any) -> Any:
        """
        Executes Python code safely using Monty, passing 'data' as a variable.
    
        Args:
            code: The Python code snippet to execute.
            data: The data structure (dict, list, etc.) to inject as the 'data' variable.
        """
    
        # Always inject data as 'data' variable
        inputs = {"data": data}
        input_names = ["data"]
    
        try:
            # Initialize Monty with the code and expected input variables
            # Using strict limits by default for safety
            m = pydantic_monty.Monty(code, inputs=input_names)
    
            # Execute the code
            result = m.run(inputs=inputs)
            return result
    
        except Exception as e:
            # Raise a clear error message that the MCP client can display
            raise RuntimeError(f"Analysis failed: {str(e)}") from e
  • server.py:142-144 (registration)
    Registration of analyze_data_tool with FastMCP using the @mcp.tool() decorator.
    @mcp.tool()
    def analyze_data_tool(code: str, data: Any) -> Any:
        """
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'safely using Monty', which hints at security or sandboxing, but does not detail specific behavioral traits such as execution limits, error handling, or output format. It adds some context but leaves gaps in transparency for a code execution tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose followed by detailed parameter explanations. Every sentence earns its place by adding essential information without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (code execution with data analysis), no annotations, and no output schema, the description is incomplete. It covers parameters well but lacks details on behavioral aspects like safety mechanisms, execution environment, or return values. For a tool with this level of complexity, more context is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'code' is Python code with an example and that 'data' is the data to analyze, specifying it can be a JSON object or string. This fully compensates for the schema's lack of documentation, providing clear semantics for both parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Execute Python code to analyze Strava data') and identifies the resource ('Strava data'). It distinguishes from sibling tools like 'get_activity_details_tool' or 'list_activities_tool' by focusing on custom analysis rather than data retrieval, making its role explicit and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by specifying that it analyzes 'Strava data safely using Monty' and includes an example, but it does not explicitly state when to use this tool versus alternatives like 'get_athlete_stats_tool' or 'search_activities_tool'. It implies usage for custom analysis but lacks explicit exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/saxenanurag/strava-mcp'
