CSV Editor

by santoshray02
get_column_statistics

Compute detailed statistics for a specific column in your CSV data.

Instructions

Get detailed statistics for a specific column.

Input Schema

Name        Required    Description    Default
session_id  Yes
column      Yes

Output Schema


No arguments
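Although no output schema is published, the handler shown below always wraps its result in a success/error envelope. The following sketch illustrates the argument shape and both envelope forms; the session id and column name are hypothetical placeholders, not values from a real server.

```python
# Hypothetical invocation arguments; "sess-001" and "price" are
# placeholders for a real session id and column name.
arguments = {"session_id": "sess-001", "column": "price"}

# The two envelope shapes the handler can return: success wraps a
# "statistics" dict, failure carries an "error" message instead.
ok = {"success": True, "statistics": {"column": "price", "type": "numeric"}}
err = {"success": False, "error": "Column 'price' not found"}

assert set(arguments) == {"session_id", "column"}
assert ok["success"] and "statistics" in ok
assert not err["success"] and "error" in err
```

An agent should therefore check the `success` flag before reading `statistics`, since a missing column or invalid session produces the error form rather than an exception.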

Implementation Reference

  • Import of get_column_statistics from the analytics module, aliased as _get_column_statistics.
    from .tools.analytics import detect_outliers as _detect_outliers
    from .tools.analytics import get_column_statistics as _get_column_statistics
  • MCP tool registration of get_column_statistics via the @mcp.tool decorator. The function is a thin wrapper that delegates to _get_column_statistics.
    @mcp.tool
    async def get_column_statistics(
        session_id: str, column: str, ctx: Context = None
    ) -> dict[str, Any]:
        """Get detailed statistics for a specific column."""
        return await _get_column_statistics(session_id, column, ctx)
  • Core handler for get_column_statistics: retrieves session data, validates the column, computes detailed statistics for numeric columns (mean, median, std, skewness, quantiles, etc.) and categorical/string columns (value counts, frequencies, string length stats), records the operation, and returns results.
    async def get_column_statistics(
        session_id: str, column: str, ctx: Context = None
    ) -> dict[str, Any]:
        """
        Get detailed statistics for a specific column.
    
        Args:
            session_id: Session identifier
            column: Column name to analyze
            ctx: FastMCP context
    
        Returns:
            Dict with detailed column statistics
        """
        try:
            manager = get_session_manager()
            session = manager.get_session(session_id)
    
            if not session or session.df is None:
                return {"success": False, "error": "Invalid session or no data loaded"}
    
            df = session.df
    
            if column not in df.columns:
                return {"success": False, "error": f"Column '{column}' not found"}
    
            col_data = df[column]
            result = {
                "column": column,
                "dtype": str(col_data.dtype),
                "total_count": len(col_data),
                "null_count": int(col_data.isna().sum()),
                "null_percentage": round(col_data.isna().sum() / len(col_data) * 100, 2),
                "unique_count": int(col_data.nunique()),
                "unique_percentage": round(col_data.nunique() / len(col_data) * 100, 2),
            }
    
            # Numeric column statistics
            if pd.api.types.is_numeric_dtype(col_data):
                non_null = col_data.dropna()
                result.update(
                    {
                        "type": "numeric",
                        "mean": float(non_null.mean()),
                        "median": float(non_null.median()),
                        "mode": float(non_null.mode()[0]) if len(non_null.mode()) > 0 else None,
                        "std": float(non_null.std()),
                        "variance": float(non_null.var()),
                        "min": float(non_null.min()),
                        "max": float(non_null.max()),
                        "range": float(non_null.max() - non_null.min()),
                        "sum": float(non_null.sum()),
                        "skewness": float(non_null.skew()),
                        "kurtosis": float(non_null.kurt()),
                        "25%": float(non_null.quantile(0.25)),
                        "50%": float(non_null.quantile(0.50)),
                        "75%": float(non_null.quantile(0.75)),
                        "iqr": float(non_null.quantile(0.75) - non_null.quantile(0.25)),
                        "zero_count": int((col_data == 0).sum()),
                        "positive_count": int((col_data > 0).sum()),
                        "negative_count": int((col_data < 0).sum()),
                    }
                )
    
            # Categorical column statistics
            else:
                value_counts = col_data.value_counts()
                top_values = value_counts.head(10).to_dict()
    
                result.update(
                    {
                        "type": "categorical",
                        "most_frequent": str(value_counts.index[0]) if len(value_counts) > 0 else None,
                        "most_frequent_count": (
                            int(value_counts.iloc[0]) if len(value_counts) > 0 else 0
                        ),
                        "top_10_values": {str(k): int(v) for k, v in top_values.items()},
                    }
                )
    
                # String-specific stats
                if col_data.dtype == "object":
                    str_data = col_data.dropna().astype(str)
                    if len(str_data) > 0:
                        str_lengths = str_data.str.len()
                        result["string_stats"] = {
                            "min_length": int(str_lengths.min()),
                            "max_length": int(str_lengths.max()),
                            "mean_length": round(str_lengths.mean(), 2),
                            "empty_string_count": int((str_data == "").sum()),
                        }
    
            session.record_operation(
                OperationType.ANALYZE, {"type": "column_statistics", "column": column}
            )
    
            return {"success": True, "statistics": result}
    
        except Exception as e:
            logger.error(f"Error getting column statistics: {e!s}")
            return {"success": False, "error": str(e)}
  • The typed signature for get_column_statistics defines the input schema: session_id (str), column (str), and ctx (optional Context).
    async def get_column_statistics(
        session_id: str, column: str, ctx: Context = None
    ) -> dict[str, Any]:
  • Imports for the analytics module: pandas, numpy, FastMCP Context, session manager, and OperationType enum used by get_column_statistics.
    """Analytics tools for CSV data analysis."""
    
    import logging
    from typing import Any
    
    import numpy as np
    import pandas as pd
    from fastmcp import Context
    
    from ..models.csv_session import get_session_manager
    from ..models.data_models import OperationType
    
    logger = logging.getLogger(__name__)
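The numeric branch of the handler can be exercised in isolation. This is a minimal sketch, not the server code: a toy Series stands in for `session.df[column]` (with one null to exercise the `dropna()` path) and a subset of the same statistics is computed with the same pandas calls.

```python
import pandas as pd

# Toy column standing in for session.df[column]; the None exercises
# the dropna() path the handler uses before computing statistics.
col_data = pd.Series([1.0, 2.0, 3.0, 4.0, None])
non_null = col_data.dropna()

# Subset of the numeric statistics the handler assembles.
stats = {
    "total_count": len(col_data),
    "null_count": int(col_data.isna().sum()),
    "mean": float(non_null.mean()),
    "median": float(non_null.median()),
    "min": float(non_null.min()),
    "max": float(non_null.max()),
    "iqr": float(non_null.quantile(0.75) - non_null.quantile(0.25)),
    "zero_count": int((col_data == 0).sum()),
}

print(stats)
```

Note that `null_count` is taken over the full column while `mean`, `median`, and the quantiles are computed on the non-null values only, matching the handler's split between `col_data` and `non_null`.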
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries full burden. It merely states 'get detailed statistics' without specifying what statistics are included (e.g., count, mean, std) or any behavioral traits like side effects or permissions. This is insufficient for a data retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (a single sentence), but this brevity comes at the cost of missing critical details. It is not optimally concise; it sacrifices informativeness for sheer shortness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 params, output schema exists) and many siblings, the description is too sparse. It fails to hint at output content or differentiate from similar tools like 'get_statistics' or 'get_value_counts', leaving the agent underinformed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%. The description adds no meaning beyond the parameter names and types. It does not explain the expected format of 'column' (e.g., name or index) or the role of 'session_id'. Without elaboration, the parameters remain ambiguous.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves detailed statistics for a specific column. It distinguishes itself from siblings like 'get_statistics' (likely dataset-wide) and 'get_value_counts' (narrower in scope), leaving no ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description lacks context, prerequisites, or exclusion criteria, making it hard for an agent to decide when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
