detect_outliers

Identify data points that deviate significantly from normal patterns in numerical columns using statistical and machine learning methods for data quality assessment and anomaly detection.

Instructions

Detect outliers in numerical columns using various algorithms.

Identifies data points that deviate significantly from the normal pattern using statistical and machine learning methods. Essential for data quality assessment and anomaly detection in analytical workflows.

Returns: Detailed outlier analysis with locations and severity scores

Detection Methods:
    📊 Z-Score: Statistical method based on standard deviations
    📈 IQR: Interquartile range method (robust to distribution)
    🤖 Isolation Forest: ML-based method for high-dimensional data
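The two statistical methods listed above can be sketched in plain pandas/NumPy. This is an illustrative sketch of the underlying math, not the tool's implementation; the sample data and thresholds are invented for demonstration.

```python
import numpy as np
import pandas as pd

# Toy series where 50.0 is an obvious outlier among values near 10.
data = pd.Series([10.0, 11.0, 9.0, 10.5, 9.5, 10.2, 50.0])

# Z-score: flag values more than `z_threshold` standard deviations from
# the mean. (Small samples cap the attainable z-score, so a lower cutoff
# than the usual 3.0 is used here.)
z_threshold = 2.0
z_scores = np.abs((data - data.mean()) / data.std())
z_outliers = data[z_scores > z_threshold]

# IQR: flag values beyond `iqr_threshold * IQR` outside the quartile range.
iqr_threshold = 1.5
q1, q3 = data.quantile(0.25), data.quantile(0.75)
iqr = q3 - q1
iqr_outliers = data[(data < q1 - iqr_threshold * iqr) | (data > q3 + iqr_threshold * iqr)]
```

Note how the IQR method flags the same point without depending on the mean and standard deviation, which the outlier itself inflates; this is what "robust to distribution" means in practice.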

Examples:

    # Basic outlier detection
    outliers = await detect_outliers(ctx, ["price", "quantity"])

    # Use IQR method with custom threshold
    outliers = await detect_outliers(ctx, ["sales"],
                                     method="iqr", threshold=2.5)

AI Workflow Integration:
    1. Data quality assessment and cleaning
    2. Anomaly detection for fraud/error identification
    3. Data preprocessing for machine learning
    4. Understanding data distribution characteristics

Input Schema

Name      | Required | Description                                                             | Default
columns   | No       | List of numerical columns to analyze for outliers (None = all numeric)  | None
method    | No       | Detection algorithm: zscore, iqr, or isolation_forest                   | iqr
threshold | No       | Sensitivity threshold (higher = less sensitive)                         | 1.5
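A hypothetical call payload matching this input schema (the column names and values are invented for illustration):

```json
{
  "columns": ["price", "quantity"],
  "method": "zscore",
  "threshold": 3.0
}
```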

Output Schema

Name               | Required | Description                              | Default
method             | Yes      | Detection method used                    |
success            | No       | Whether operation completed successfully |
threshold          | Yes      | Threshold value used for detection       |
outliers_found     | Yes      | Total number of outliers detected        |
outliers_by_column | Yes      | Outliers grouped by column name          |
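An illustrative result matching this output schema (row indices and values are invented; which score field is populated depends on the method used):

```json
{
  "success": true,
  "method": "iqr",
  "threshold": 1.5,
  "outliers_found": 2,
  "outliers_by_column": {
    "price": [
      {"row_index": 17, "value": 999.0, "z_score": null, "iqr_score": 4.2},
      {"row_index": 42, "value": -50.0, "z_score": null, "iqr_score": 3.1}
    ]
  }
}
```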

Implementation Reference

  • The core handler function implementing outlier detection using IQR and Z-score methods on numeric columns from session data. Handles column selection, outlier calculation, and returns structured results.
    async def detect_outliers(
        ctx: Annotated[Context, Field(description="FastMCP context for session access")],
        columns: Annotated[
            list[str] | None,
            Field(description="List of numerical columns to analyze for outliers (None = all numeric)"),
        ] = None,
        method: Annotated[
            str,
            Field(description="Detection algorithm: zscore, iqr, or isolation_forest"),
        ] = "iqr",
        threshold: Annotated[
            float,
            Field(description="Sensitivity threshold (higher = less sensitive)"),
        ] = 1.5,
    ) -> OutliersResult:
        """Detect outliers in numerical columns using various algorithms.
    
        Identifies data points that deviate significantly from the normal pattern
        using statistical and machine learning methods. Essential for data quality
        assessment and anomaly detection in analytical workflows.
    
        Returns:
            Detailed outlier analysis with locations and severity scores
    
        Detection Methods:
            📊 Z-Score: Statistical method based on standard deviations
            📈 IQR: Interquartile range method (robust to distribution)
            🤖 Isolation Forest: ML-based method for high-dimensional data
    
        Examples:
            # Basic outlier detection
            outliers = await detect_outliers(ctx, ["price", "quantity"])
    
            # Use IQR method with custom threshold
            outliers = await detect_outliers(ctx, ["sales"],
                                            method="iqr", threshold=2.5)
    
        AI Workflow Integration:
            1. Data quality assessment and cleaning
            2. Anomaly detection for fraud/error identification
            3. Data preprocessing for machine learning
            4. Understanding data distribution characteristics
    
        """
        # Get session_id from FastMCP context
        session_id = ctx.session_id
        _session, df = get_session_data(session_id)
    
        # Select numeric columns
        if columns:
            missing_cols = [col for col in columns if col not in df.columns]
            if missing_cols:
                raise ColumnNotFoundError(missing_cols[0], df.columns.tolist())
            numeric_df = df[columns].select_dtypes(include=[np.number])
        else:
            numeric_df = df.select_dtypes(include=[np.number])
    
        if numeric_df.empty:
            raise InvalidParameterError(
                "columns",  # noqa: EM101
                columns if columns else "auto-detected",
                "at least one numeric column",
            )
    
        outliers_by_column = {}
        total_outliers_count = 0
    
        if method == "iqr":
            for col in numeric_df.columns:
                q1 = numeric_df[col].quantile(0.25)
                q3 = numeric_df[col].quantile(0.75)
                iqr = q3 - q1
    
                lower_bound = q1 - threshold * iqr
                upper_bound = q3 + threshold * iqr
    
                outlier_mask = (numeric_df[col] < lower_bound) | (numeric_df[col] > upper_bound)
                outlier_indices = df.index[outlier_mask]
    
                # Create OutlierInfo objects for each outlier
                outlier_infos = []
                for idx in outlier_indices[:100]:  # Limit to first 100
                    raw_value = df.loc[idx, col]
                    try:
                        value = float(cast("Any", raw_value))
                    except (ValueError, TypeError):
                        continue  # Skip non-numeric values
    
                    # Calculate IQR score (distance from nearest bound relative to IQR)
                    if value < lower_bound:
                        iqr_score = float((lower_bound - value) / iqr) if iqr > 0 else 0.0
                    else:
                        iqr_score = float((value - upper_bound) / iqr) if iqr > 0 else 0.0
    
                    outlier_infos.append(
                        OutlierInfo(row_index=int(idx), value=value, iqr_score=iqr_score),
                    )
    
                outliers_by_column[col] = outlier_infos
                total_outliers_count += len(outlier_indices)
    
        elif method == "zscore":
            for col in numeric_df.columns:
                col_mean = numeric_df[col].mean()
                col_std = numeric_df[col].std()
                z_scores = np.abs((numeric_df[col] - col_mean) / col_std)
                outlier_mask = z_scores > threshold
                outlier_indices = df.index[outlier_mask]
    
                # Create OutlierInfo objects for each outlier
                outlier_infos = []
                for idx in outlier_indices[:100]:  # Limit to first 100
                    raw_value = df.loc[idx, col]
                    try:
                        value = float(cast("Any", raw_value))
                    except (ValueError, TypeError):
                        continue  # Skip non-numeric values
    
                    z_score = float(abs((value - col_mean) / col_std)) if col_std > 0 else 0.0
    
                    outlier_infos.append(
                        OutlierInfo(row_index=int(idx), value=value, z_score=z_score),
                    )
    
                outliers_by_column[col] = outlier_infos
                total_outliers_count += len(outlier_indices)
    
        else:
            raise InvalidParameterError(
                "method",  # noqa: EM101
                method,
                "zscore, iqr, or isolation_forest",
            )
    
        # Map method names to match Pydantic model expectations
        if method == "zscore":
            pydantic_method = "zscore"
        elif method == "iqr":
            pydantic_method = "iqr"
        else:
            pydantic_method = "isolation_forest"
    
        return OutliersResult(
            outliers_found=total_outliers_count,
            outliers_by_column=outliers_by_column,
            method=cast("Literal['zscore', 'iqr', 'isolation_forest']", pydantic_method),
            threshold=threshold,
        )
  • Pydantic response model defining the structure of the detect_outliers tool output.
    class OutliersResult(BaseToolResponse):
        """Response model for outlier detection analysis."""
    
        outliers_found: int = Field(description="Total number of outliers detected")
        outliers_by_column: dict[str, list[OutlierInfo]] = Field(
            description="Outliers grouped by column name",
        )
        method: Literal["zscore", "iqr", "isolation_forest"] = Field(
            description="Detection method used",
        )
        threshold: float = Field(description="Threshold value used for detection")
  • Pydantic model used within OutliersResult for individual outlier details.
    class OutlierInfo(BaseModel):
        """Information about a detected outlier."""
    
        row_index: int = Field(description="Row index where outlier was detected")
        value: float = Field(description="Outlier value found")
        z_score: float | None = Field(default=None, description="Z-score if using z-score method")
        iqr_score: float | None = Field(default=None, description="IQR score if using IQR method")
  • Registration of the detect_outliers function as an MCP tool on the discovery_server.
    discovery_server.tool(name="detect_outliers")(detect_outliers)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well: it explains what the tool returns ('Detailed outlier analysis with locations and severity scores'), describes three detection methods with their characteristics, and provides example usage patterns. It doesn't mention performance, rate limits, or data size constraints, but covers core behavior adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Returns, Detection Methods, Examples, AI Workflow Integration), but it's somewhat verbose. Some content like the emoji-enhanced method descriptions and numbered workflow list could be more concise while maintaining clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (statistical/ML outlier detection), no annotations, but 100% schema coverage and an output schema exists, the description is complete enough. It explains purpose, methods, returns, usage examples, and integration contexts—providing all necessary context for an AI agent to understand and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal parameter semantics beyond the schema—it mentions 'custom threshold' in an example and lists method names in the 'Detection Methods' section, but doesn't provide additional meaning about parameter interactions or effects.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Detect outliers in numerical columns using various algorithms.' It specifies the verb ('detect'), resource ('outliers in numerical columns'), and distinguishes it from siblings like 'find_anomalies' by emphasizing statistical/ML methods for outlier detection rather than general anomaly finding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'AI Workflow Integration' section provides clear context for when to use this tool (data quality assessment, anomaly detection, preprocessing, distribution analysis). However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings (e.g., 'find_anomalies' might overlap).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
