
fairness_metrics

Compute fairness metrics from prediction data to assess bias and compliance. Use optional ground truth for equalized odds and calibration metrics.

Instructions

Calculate fairness metrics from prediction data. Input format: comma-separated values with group labels.

Provide predictions as 'group:prediction' pairs separated by commas. Example: "male:1,female:0,male:1,female:1,male:0,female:0"

If ground_truth is provided, use the same format for actual outcomes to compute equalized odds and calibration metrics.

Args:
- predictions: Comma-separated group:prediction pairs (e.g. "male:1,female:0,male:1").
- ground_truth: Optional comma-separated group:actual pairs for outcome-based metrics.
- api_key: Optional MEOK API key for pro tier.

Behavior: This tool is read-only and stateless — it produces analysis output without modifying any external systems, databases, or files. Safe to call repeatedly with identical inputs (idempotent). Free tier: 10/day rate limit. Pro tier: unlimited. No authentication required for basic usage.

When to use: Use this tool when you need to assess, audit, or verify compliance requirements. Ideal for gap analysis, readiness checks, and generating compliance documentation.

When NOT to use: Do not use as a substitute for qualified legal counsel. This tool provides technical compliance guidance, not legal advice.
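
As a quick illustration of the metrics described above, the selection rates and disparate impact ratio for the example input can be reproduced in a few lines of plain Python. This is a standalone sketch of the 4/5ths-rule arithmetic, not the server implementation:

```python
from collections import defaultdict

def disparate_impact(predictions: str) -> float:
    """Parse 'group:prediction' pairs and return the min/max selection-rate ratio."""
    groups = defaultdict(list)
    for pair in predictions.split(","):
        group, pred = pair.strip().rsplit(":", 1)
        groups[group.strip().lower()].append(int(pred))
    # selection rate = fraction of positive (1) predictions within each group
    rates = [sum(p) / len(p) for p in groups.values()]
    return min(rates) / max(rates)

ratio = disparate_impact("male:1,female:0,male:1,female:1,male:0,female:0")
# male rate = 2/3, female rate = 1/3, so the ratio is 0.5 and fails the 4/5ths rule
```

A ratio below 0.8 is what the tool reports as failing the 4/5ths rule.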

Input Schema

Name           Required   Description   Default
predictions    Yes        -             -
ground_truth   No         -             -
api_key        No         -             -

Implementation Reference

  • server.py:549-554 (registration)
    The 'fairness_metrics' tool is registered as an MCP tool via the @mcp.tool() decorator on the FastMCP instance.
    @mcp.tool()
    def fairness_metrics(
        predictions: str,
        ground_truth: str = "",
        api_key: str = "",
    ) -> dict:
  • Main handler function for fairness_metrics. Parses comma-separated group:prediction pairs, computes demographic statistics per group, calculates disparate impact ratio (4/5ths rule), statistical parity difference, and optionally equalized odds if ground_truth is provided. Returns a fairness assessment dict with pass/fail verdict and EU AI Act references.
    def fairness_metrics(
        predictions: str,
        ground_truth: str = "",
        api_key: str = "",
    ) -> dict:
        """Calculate fairness metrics from prediction data. Input format: comma-separated values with group labels.
    
        Provide predictions as 'group:prediction' pairs separated by commas.
        Example: "male:1,female:0,male:1,female:1,male:0,female:0"
    
        If ground_truth is provided, use same format for actual outcomes to compute
        equalized odds and calibration metrics.
    
        Args:
            predictions: Comma-separated group:prediction pairs (e.g. "male:1,female:0,male:1").
            ground_truth: Optional comma-separated group:actual pairs for outcome-based metrics.
            api_key: Optional MEOK API key for pro tier.
    
        Behavior:
            This tool is read-only and stateless — it produces analysis output
            without modifying any external systems, databases, or files.
            Safe to call repeatedly with identical inputs (idempotent).
            Free tier: 10/day rate limit. Pro tier: unlimited.
            No authentication required for basic usage.
    
        When to use:
            Use this tool when you need to assess, audit, or verify compliance
            requirements. Ideal for gap analysis, readiness checks, and generating
            compliance documentation.
    
        When NOT to use:
            Do not use as a substitute for qualified legal counsel. This tool
            provides technical compliance guidance, not legal advice.
        """
        allowed, msg, tier = check_access(api_key)
        if not allowed:
            return {"error": msg, "upgrade_url": "https://meok.ai/pricing"}
        limit_err = _check_rate_limit("fairness_metrics", tier)
        if limit_err:
            return {"error": "rate_limited", "message": limit_err}
    
        # Parse predictions
        group_preds = defaultdict(list)  # type: Dict[str, List[int]]
        try:
            for pair in predictions.split(","):
                pair = pair.strip()
                if ":" not in pair:
                    continue
                group, pred = pair.rsplit(":", 1)
                group_preds[group.strip().lower()].append(int(pred.strip()))
        except (ValueError, IndexError):
            return {
                "error": "invalid_format",
                "message": "Use format: group:prediction (e.g. 'male:1,female:0'). Predictions must be 0 or 1.",
            }
    
        if len(group_preds) < 2:
            return {
                "error": "insufficient_groups",
                "message": "Need at least 2 groups for fairness comparison. Found: {}".format(list(group_preds.keys())),
            }
    
        # Calculate selection rates per group
        group_stats = {}  # type: Dict[str, Dict[str, object]]
        for group, preds in group_preds.items():
            positive_rate = sum(preds) / len(preds) if preds else 0
            group_stats[group] = {
                "total": len(preds),
                "positive": sum(preds),
                "negative": len(preds) - sum(preds),
                "positive_rate": round(positive_rate, 4),
            }
    
        # Disparate impact (4/5ths rule)
        rates = [(g, s["positive_rate"]) for g, s in group_stats.items()]
        max_rate = max(r[1] for r in rates) if rates else 1
        min_rate = min(r[1] for r in rates) if rates else 0
    
        disparate_impact_ratio = round(min_rate / max_rate, 4) if max_rate > 0 else 0.0
        passes_four_fifths = disparate_impact_ratio >= 0.8
    
        # Statistical parity difference
        stat_parity_diff = round(max_rate - min_rate, 4)
    
        # Parse ground truth if provided
        equalized_odds = None
        if ground_truth:
            group_actuals = defaultdict(list)  # type: Dict[str, List[int]]
            try:
                for pair in ground_truth.split(","):
                    pair = pair.strip()
                    if ":" not in pair:
                        continue
                    group, actual = pair.rsplit(":", 1)
                    group_actuals[group.strip().lower()].append(int(actual.strip()))
            except (ValueError, IndexError):
                group_actuals = defaultdict(list)
    
            if group_actuals and len(group_actuals) >= 2:
                # Calculate TPR and FPR per group
                eo_stats = {}  # type: Dict[str, Dict[str, float]]
                for group in group_preds:
                    if group not in group_actuals:
                        continue
                    preds = group_preds[group]
                    actuals = group_actuals[group]
                    n = min(len(preds), len(actuals))
                    tp = sum(1 for i in range(n) if preds[i] == 1 and actuals[i] == 1)
                    fp = sum(1 for i in range(n) if preds[i] == 1 and actuals[i] == 0)
                    fn = sum(1 for i in range(n) if preds[i] == 0 and actuals[i] == 1)
                    tn = sum(1 for i in range(n) if preds[i] == 0 and actuals[i] == 0)
    
                    tpr = tp / (tp + fn) if (tp + fn) > 0 else 0.0
                    fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    
                    eo_stats[group] = {
                        "true_positive_rate": round(tpr, 4),
                        "false_positive_rate": round(fpr, 4),
                        "accuracy": round((tp + tn) / n, 4) if n > 0 else 0.0,
                    }
    
                if eo_stats:
                    tprs = [s["true_positive_rate"] for s in eo_stats.values()]
                    fprs = [s["false_positive_rate"] for s in eo_stats.values()]
                    equalized_odds = {
                        "group_metrics": eo_stats,
                        "tpr_gap": round(max(tprs) - min(tprs), 4),
                        "fpr_gap": round(max(fprs) - min(fprs), 4),
                        "equalized_odds_satisfied": (max(tprs) - min(tprs)) < 0.1 and (max(fprs) - min(fprs)) < 0.1,
                    }
    
        # Overall fairness assessment
        issues = []  # type: List[str]
        if not passes_four_fifths:
            issues.append(
                "FAILS 4/5ths rule (disparate impact ratio {:.2f} < 0.80) -- "
                "prima facie evidence of discrimination under US EEOC guidelines".format(disparate_impact_ratio)
            )
        if stat_parity_diff > 0.1:
            issues.append(
                "Statistical parity gap {:.2f} exceeds 0.10 threshold -- "
                "groups receive positive outcomes at significantly different rates".format(stat_parity_diff)
            )
        if equalized_odds and not equalized_odds["equalized_odds_satisfied"]:
            issues.append(
                "Equalized odds NOT satisfied -- error rates differ across groups"
            )
    
        return {
            "group_statistics": group_stats,
            "disparate_impact": {
                "ratio": disparate_impact_ratio,
                "passes_four_fifths_rule": passes_four_fifths,
                "highest_rate_group": max(rates, key=lambda x: x[1])[0] if rates else "N/A",
                "lowest_rate_group": min(rates, key=lambda x: x[1])[0] if rates else "N/A",
            },
            "statistical_parity": {
                "difference": stat_parity_diff,
                "acceptable": stat_parity_diff < 0.1,
            },
            "equalized_odds": equalized_odds,
            "fairness_issues": issues if issues else ["No significant fairness issues detected"],
            "overall_assessment": "FAIL -- fairness issues detected" if issues else "PASS -- no significant fairness issues",
            "eu_ai_act_note": (
                "Article 10(2)(f) requires examination of training data for biases. "
                "Article 10(3) requires data to be representative. "
                "Document these metrics in Annex IV Section 4."
            ),
            "next_step": "Use mitigation_recommendations for specific remediation strategies",
            "meok_labs": "https://meok.ai",
        }
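
The equalized-odds portion of the handler can be exercised in isolation. The sketch below mirrors its TPR/FPR gap logic on hypothetical, already-parsed prediction and ground-truth lists (illustrative data, not from the server):

```python
def equalized_odds_gaps(group_preds, group_actuals):
    """Per-group TPR/FPR and the max cross-group gaps, as in the handler above."""
    stats = {}
    for group, preds in group_preds.items():
        actuals = group_actuals[group]
        n = min(len(preds), len(actuals))
        tp = sum(1 for i in range(n) if preds[i] == 1 and actuals[i] == 1)
        fp = sum(1 for i in range(n) if preds[i] == 1 and actuals[i] == 0)
        fn = sum(1 for i in range(n) if preds[i] == 0 and actuals[i] == 1)
        tn = sum(1 for i in range(n) if preds[i] == 0 and actuals[i] == 0)
        stats[group] = {
            "tpr": tp / (tp + fn) if tp + fn else 0.0,
            "fpr": fp / (fp + tn) if fp + tn else 0.0,
        }
    tpr_gap = max(s["tpr"] for s in stats.values()) - min(s["tpr"] for s in stats.values())
    fpr_gap = max(s["fpr"] for s in stats.values()) - min(s["fpr"] for s in stats.values())
    # same 0.1 threshold the handler applies to both gaps
    return tpr_gap, fpr_gap, tpr_gap < 0.1 and fpr_gap < 0.1

tpr_gap, fpr_gap, satisfied = equalized_odds_gaps(
    {"a": [1, 0, 1, 0], "b": [1, 0, 0, 0]},
    {"a": [1, 0, 0, 1], "b": [1, 1, 0, 0]},
)
# group a: TPR 0.5, FPR 0.5; group b: TPR 0.5, FPR 0.0
# tpr_gap 0.0, fpr_gap 0.5, so equalized odds is not satisfied
```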
  • Rate limiting helper called by fairness_metrics to enforce free tier (10/day) vs pro (unlimited) limits.
    def _check_rate_limit(caller="anonymous", tier="free"):
        # type: (str, str) -> Optional[str]
        """Returns error string if rate-limited, else None."""
        if tier == "pro":
            return None
        now = datetime.now()
        cutoff = now - timedelta(days=1)
        _usage[caller] = [t for t in _usage[caller] if t > cutoff]
        if len(_usage[caller]) >= FREE_DAILY_LIMIT:
            return (
                "Free tier limit reached ({}/day). "
                "Upgrade to MEOK AI Labs Pro for unlimited access at $29/mo: "
                "https://meok.ai/mcp/bias-detection/pro".format(FREE_DAILY_LIMIT)
            )
        _usage[caller].append(now)
        return None
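
The sliding-window pattern above can be exercised standalone. This sketch assumes FREE_DAILY_LIMIT = 10 and an in-memory `_usage` map, matching the excerpt:

```python
from collections import defaultdict
from datetime import datetime, timedelta

FREE_DAILY_LIMIT = 10
_usage = defaultdict(list)

def check_rate_limit(caller="anonymous", tier="free"):
    """Return an error string if the caller is over the daily limit, else None."""
    if tier == "pro":
        return None  # pro tier is unlimited
    now = datetime.now()
    # keep only timestamps from the trailing 24-hour window
    _usage[caller] = [t for t in _usage[caller] if t > now - timedelta(days=1)]
    if len(_usage[caller]) >= FREE_DAILY_LIMIT:
        return "Free tier limit reached ({}/day).".format(FREE_DAILY_LIMIT)
    _usage[caller].append(now)
    return None

results = [check_rate_limit("demo") for _ in range(11)]
# the first 10 calls succeed (None); the 11th returns an error string
```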
  • Access check helper used by fairness_metrics to validate API key and determine tier (free/pro).
    def check_access(api_key=""):
        # type: (str) -> Tuple[bool, str, str]
        """Unified access check -- works with or without shared auth engine."""
        return _shared_check_access(api_key)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully carries the burden. It states the tool is read-only, stateless, idempotent, includes rate limits (10/day free, unlimited pro), and clarifies authentication needs (optional api_key, no auth required for basic usage).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured into sections but somewhat verbose: the comma-separated input format is stated twice. Most sentences earn their place, but the duplication could be trimmed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and the description does not specify the return format or structure of the fairness metrics (e.g., list, dictionary, scores). This leaves the agent guessing about what the tool actually returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description compensates well by explaining the comma-separated 'group:prediction' format for predictions and ground_truth, and the purpose of api_key for the pro tier. It could additionally state accepted value ranges (predictions must be 0 or 1).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Calculate fairness metrics from prediction data' and provides specific metrics (equalized odds, calibration), distinguishing it from sibling tools like detect_bias or regulatory_check by focusing on metric calculation rather than detection or checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'When to use' and 'When NOT to use' sections provide context for compliance audits and gap analysis, and warn against using as legal advice. However, no direct comparison to sibling tools is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
