
group_by_aggregate

Group data and compute aggregations like sum, mean, and count to analyze patterns across segments for insights and reporting.

Instructions

Group data and compute aggregations for analytical insights.

Performs GROUP BY operations with multiple aggregation functions per column. Essential for segmentation analysis and understanding patterns across different data groups.

Returns: Grouped aggregation results with statistics per group

Aggregation Functions:
📊 count, mean, median, sum, min, max
📈 std, var (statistical measures)
🎯 first, last (positional)
📋 nunique (unique count)
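These function names match pandas aggregation methods, so (assuming pandas semantics) the first example below corresponds roughly to a plain-pandas call like this sketch, with sample data invented for illustration:

import pandas as pd

df = pd.DataFrame({"region": ["east", "east", "west"],
                   "sales": [100, 200, 50]})

# Roughly what aggregations={"sales": ["sum", "mean", "count"]} requests:
print(df.groupby("region").agg({"sales": ["sum", "mean", "count"]}))

Note that the reference implementation below summarizes each group with a fixed set of statistics taken from the first numeric column, rather than applying this mapping verbatim.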

Examples:

# Sales analysis by region
result = await group_by_aggregate(ctx,
                                group_by=["region"],
                                aggregations={"sales": ["sum", "mean", "count"]})

# Multi-dimensional grouping
result = await group_by_aggregate(ctx,
                                group_by=["category", "region"],
                                aggregations={
                                    "price": ["mean", "std"],
                                    "quantity": ["sum", "count"]
                                })

AI Workflow Integration:
1. Segmentation analysis and market research
2. Feature engineering for categorical interactions
3. Data summarization for reporting and insights
4. Understanding group-based patterns and trends

Input Schema

Name          Required  Description                                                  Default
group_by      Yes       List of columns to group by for segmentation analysis
aggregations  Yes       Dict mapping column names to list of aggregation functions

Output Schema

Name                Required  Description                                                        Default
groups              Yes       Per-group statistics, keyed by group name ("_"-joined for
                              multi-column groups)
success             No        Whether operation completed successfully
total_groups        Yes       Total number of groups
group_by_columns    Yes       Columns used for grouping
aggregated_columns  Yes       Columns for which aggregations were requested
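For orientation, a hypothetical serialized response for the sample frame and request sketched earlier, shown as the Python dict the response model would dump (all values invented; std is reported as None for single-row groups, matching the implementation below):

{
    "success": True,
    "groups": {
        "east": {"count": 2, "mean": 150.0, "sum": 300.0,
                 "min": 100.0, "max": 200.0, "std": 70.71},
        "west": {"count": 1, "mean": 50.0, "sum": 50.0,
                 "min": 50.0, "max": 50.0, "std": None},
    },
    "total_groups": 2,
    "group_by_columns": ["region"],
    "aggregated_columns": ["sales"],
}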

Implementation Reference

  • The main handler function for the 'group_by_aggregate' tool. It retrieves the session dataframe, validates the group-by and aggregation columns, performs the groupby operation, computes summary statistics from the first numeric column of each group, and returns a GroupAggregateResult. A distilled plain-pandas sketch of this logic follows the reference list.
    async def group_by_aggregate(
        ctx: Annotated[Context, Field(description="FastMCP context for session access")],
        group_by: Annotated[
            list[str],
            Field(description="List of columns to group by for segmentation analysis"),
        ],
        aggregations: Annotated[
            dict[str, list[str]],
            Field(description="Dict mapping column names to list of aggregation functions"),
        ],
    ) -> GroupAggregateResult:
        """Group data and compute aggregations for analytical insights.
    
        Performs GROUP BY operations with multiple aggregation functions
        per column. Essential for segmentation analysis and understanding patterns
        across different data groups.
    
        Returns:
            Grouped aggregation results with statistics per group
    
        Aggregation Functions:
            📊 count, mean, median, sum, min, max
            📈 std, var (statistical measures)
            🎯 first, last (positional)
            📋 nunique (unique count)
    
        Examples:
            # Sales analysis by region
            result = await group_by_aggregate(ctx,
                                            group_by=["region"],
                                            aggregations={"sales": ["sum", "mean", "count"]})
    
            # Multi-dimensional grouping
            result = await group_by_aggregate(ctx,
                                            group_by=["category", "region"],
                                            aggregations={
                                                "price": ["mean", "std"],
                                                "quantity": ["sum", "count"]
                                            })
    
        AI Workflow Integration:
            1. Segmentation analysis and market research
            2. Feature engineering for categorical interactions
            3. Data summarization for reporting and insights
            4. Understanding group-based patterns and trends
    
        """
        # Get session_id from FastMCP context
        session_id = ctx.session_id
        _session, df = get_session_data(session_id)
    
        # Validate group by columns
        missing_cols = [col for col in group_by if col not in df.columns]
        if missing_cols:
            raise ColumnNotFoundError(missing_cols[0], df.columns.tolist())
    
        # Validate aggregation columns
        agg_cols = list(aggregations.keys())
        missing_agg_cols = [col for col in agg_cols if col not in df.columns]
        if missing_agg_cols:
            raise ColumnNotFoundError(missing_agg_cols[0], df.columns.tolist())
    
        # Perform groupby to get group statistics
        grouped = df.groupby(group_by)
    
        # Create GroupStatistics for each group
        group_stats = {}
    
        for group_name, group_data in grouped:
            # Convert group name to string for dict key
            if isinstance(group_name, tuple):
                group_key = "_".join(str(x) for x in group_name)
            else:
                group_key = str(group_name)
    
            # Calculate basic statistics for the group
            # Focus on first numeric column for statistics, or count for non-numeric
            numeric_cols = group_data.select_dtypes(include=[np.number]).columns
    
            if len(numeric_cols) > 0:
                # Use first numeric column for statistics
                first_numeric = group_data[numeric_cols[0]]
                group_stats[group_key] = GroupStatistics(
                    count=len(group_data),
                    mean=float(first_numeric.mean()) if not pd.isna(first_numeric.mean()) else None,
                    sum=float(first_numeric.sum()) if not pd.isna(first_numeric.sum()) else None,
                    min=float(first_numeric.min()) if not pd.isna(first_numeric.min()) else None,
                    max=float(first_numeric.max()) if not pd.isna(first_numeric.max()) else None,
                    std=float(first_numeric.std()) if not pd.isna(first_numeric.std()) else None,
                )
            else:
                # No numeric columns, just provide count
                group_stats[group_key] = GroupStatistics(count=len(group_data))
    
        return GroupAggregateResult(
            groups=group_stats,
            group_by_columns=group_by,
            aggregated_columns=agg_cols,
            total_groups=len(group_stats),
        )
  • Pydantic model defining the output schema for the group_by_aggregate tool response.
    class GroupAggregateResult(BaseToolResponse):
        """Response model for group aggregation operations."""
    
        groups: dict[str, GroupStatistics]
        group_by_columns: list[str]
        aggregated_columns: list[str]
        total_groups: int
  • Pydantic model used within GroupAggregateResult for per-group statistics.
    class GroupStatistics(BaseModel):
        """Statistics for a grouped data segment."""
    
        count: int = Field(description="Number of records in this group")
        mean: float | None = Field(default=None, description="Mean value for numeric columns")
        sum: float | None = Field(default=None, description="Sum of values for numeric columns")
        min: float | None = Field(default=None, description="Minimum value in the group")
        max: float | None = Field(default=None, description="Maximum value in the group")
        std: float | None = Field(default=None, description="Standard deviation for numeric columns")
  • Registration of the group_by_aggregate handler as an MCP tool on the discovery_server.
    discovery_server.tool(name="group_by_aggregate")(group_by_aggregate)
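This registers the handler by calling the tool decorator directly; with FastMCP it is equivalent to writing @discovery_server.tool(name="group_by_aggregate") above the function definition.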
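As a distilled illustration of the handler's grouping logic (referenced above), here is a minimal standalone sketch in plain pandas; the sample data are invented, and session handling, validation, and the Pydantic models are omitted:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "region": ["east", "east", "west"],
    "category": ["a", "b", "a"],
    "sales": [100.0, 200.0, 50.0],
})

group_by = ["category", "region"]
group_stats: dict[str, dict] = {}

for group_name, group_data in df.groupby(group_by):
    # Multi-column keys arrive as tuples and are joined with "_",
    # e.g. ("a", "east") becomes "a_east".
    key = (
        "_".join(str(x) for x in group_name)
        if isinstance(group_name, tuple)
        else str(group_name)
    )

    numeric_cols = group_data.select_dtypes(include=[np.number]).columns
    if len(numeric_cols) > 0:
        # As in the handler, statistics come from the first numeric column.
        first_numeric = group_data[numeric_cols[0]]
        group_stats[key] = {
            "count": len(group_data),
            "mean": float(first_numeric.mean()),
            "sum": float(first_numeric.sum()),
        }
    else:
        group_stats[key] = {"count": len(group_data)}

print(group_stats)
# {'a_east': {'count': 1, 'mean': 100.0, 'sum': 100.0},
#  'a_west': {'count': 1, 'mean': 50.0, 'sum': 50.0},
#  'b_east': {'count': 1, 'mean': 200.0, 'sum': 200.0}}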
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by detailing the aggregation functions available, providing example usage patterns, and explaining the return format ('grouped aggregation results with statistics per group'). It lacks explicit mention of performance characteristics or error handling, but covers core operational behavior well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections but is somewhat verbose. Sentences like 'Essential for segmentation analysis and understanding patterns across different data groups' could be more concise. The examples and AI workflow integration add value but contribute to length, making it less front-loaded than ideal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (analytical grouping with multiple functions), no annotations, and the presence of an output schema, the description is complete. It covers purpose, usage, behavior, parameters through examples, and return values, providing sufficient context for an AI agent to understand and invoke the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds some value by listing aggregation functions and providing examples that illustrate how parameters work together, but does not significantly enhance the schema's documentation of group_by and aggregations parameters beyond what is already covered.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('group data and compute aggregations') and distinguishes it from siblings by focusing on analytical segmentation rather than data manipulation or basic statistics. It explicitly mentions 'GROUP BY operations' and 'analytical insights,' differentiating it from tools like get_column_statistics or get_value_counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('essential for segmentation analysis and understanding patterns across different data groups') and includes AI workflow integration examples. However, it does not explicitly state when NOT to use it or name specific alternatives among sibling tools, such as get_value_counts for simpler grouping.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jonpspri/databeak'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.