# fairness_metrics
Compute fairness metrics from prediction data to assess bias and compliance. Use optional ground truth for equalized odds and calibration metrics.
## Instructions
Calculate fairness metrics from prediction data. Input format: comma-separated values with group labels.
Provide predictions as 'group:prediction' pairs separated by commas. Example: "male:1,female:0,male:1,female:1,male:0,female:0"
If ground_truth is provided, use the same format for the actual outcomes to compute equalized odds and calibration metrics.
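The pair format above can be parsed in a few lines; this sketch mirrors the tolerant parsing the handler applies (fragments without a colon are skipped, group names are lower-cased, and `rsplit` keeps any colons inside the group label intact). The function name `parse_pairs` is illustrative, not part of the tool's API:

```python
from collections import defaultdict


def parse_pairs(data: str) -> dict:
    """Parse 'group:value' pairs into per-group lists of ints.

    Fragments without a colon are skipped; group names are
    case-insensitive (normalized to lowercase).
    """
    groups = defaultdict(list)
    for pair in data.split(","):
        pair = pair.strip()
        if ":" not in pair:
            continue
        group, value = pair.rsplit(":", 1)
        groups[group.strip().lower()].append(int(value.strip()))
    return dict(groups)


print(parse_pairs("male:1,female:0,Male:1,female:1"))
# {'male': [1, 1], 'female': [0, 1]}
```

The same routine is reused for `ground_truth`, which is why both parameters share one format.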
Args:
- `predictions`: Comma-separated group:prediction pairs (e.g. `"male:1,female:0,male:1"`).
- `ground_truth`: Optional comma-separated group:actual pairs for outcome-based metrics.
- `api_key`: Optional MEOK API key for the pro tier.
Behavior: This tool is read-only and stateless — it produces analysis output without modifying any external systems, databases, or files. Safe to call repeatedly with identical inputs (idempotent). Free tier: 10/day rate limit. Pro tier: unlimited. No authentication required for basic usage.
When to use: Use this tool when you need to assess, audit, or verify compliance requirements. Ideal for gap analysis, readiness checks, and generating compliance documentation.
When NOT to use: Do not use as a substitute for qualified legal counsel. This tool provides technical compliance guidance, not legal advice.
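The tool's core checks are the EEOC 4/5ths (disparate impact) rule and the statistical parity difference, both computed from per-group selection rates. As a minimal standalone sketch (the function `disparate_impact` is illustrative, not the tool's API; thresholds 0.8 and the rounding match the handler):

```python
def disparate_impact(rates: dict) -> dict:
    """Disparate impact ratio and statistical parity difference
    from per-group positive (selection) rates."""
    lo, hi = min(rates.values()), max(rates.values())
    ratio = lo / hi if hi > 0 else 0.0
    return {
        "disparate_impact_ratio": round(ratio, 4),
        "passes_four_fifths_rule": ratio >= 0.8,  # EEOC 4/5ths rule
        "statistical_parity_difference": round(hi - lo, 4),
    }


# 6 of 10 men selected vs 3 of 10 women: ratio 0.5 fails the 4/5ths rule
print(disparate_impact({"male": 0.6, "female": 0.3}))
# {'disparate_impact_ratio': 0.5, 'passes_four_fifths_rule': False,
#  'statistical_parity_difference': 0.3}
```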
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| predictions | Yes | Comma-separated group:prediction pairs (e.g. `"male:1,female:0,male:1"`) | — |
| ground_truth | No | Comma-separated group:actual pairs for outcome-based metrics | `""` |
| api_key | No | MEOK API key for the pro tier | `""` |
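For orientation, here is an abridged sketch of the result shape, with values worked out by hand from the six-pair example input above (male predictions `[1, 1, 0]`, female `[0, 1, 0]`); the keys mirror the handler's return dict, but this literal is illustrative, not output captured from the tool:

```python
# Abridged, hand-computed illustration of the result shape for
# predictions="male:1,female:0,male:1,female:1,male:0,female:0"
sample_result = {
    "group_statistics": {
        "male": {"total": 3, "positive": 2, "negative": 1, "positive_rate": 0.6667},
        "female": {"total": 3, "positive": 1, "negative": 2, "positive_rate": 0.3333},
    },
    "disparate_impact": {
        "ratio": 0.4999,  # 0.3333 / 0.6667, below the 0.80 threshold
        "passes_four_fifths_rule": False,
        "highest_rate_group": "male",
        "lowest_rate_group": "female",
    },
    "statistical_parity": {"difference": 0.3334, "acceptable": False},
    "equalized_odds": None,  # populated only when ground_truth is supplied
    "overall_assessment": "FAIL -- fairness issues detected",
}
```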
## Implementation Reference
- **server.py:549-554 (registration)**: The `fairness_metrics` tool is registered as an MCP tool via the `@mcp.tool()` decorator on the FastMCP instance.

```python
@mcp.tool()
def fairness_metrics(
    predictions: str,
    ground_truth: str = "",
    api_key: str = "",
) -> dict:
```

- **server.py:550-720 (handler)**: Main handler function for `fairness_metrics`. Parses comma-separated group:prediction pairs, computes demographic statistics per group, calculates the disparate impact ratio (4/5ths rule) and statistical parity difference, and optionally equalized odds if `ground_truth` is provided. Returns a fairness assessment dict with a pass/fail verdict and EU AI Act references.
```python
def fairness_metrics(
    predictions: str,
    ground_truth: str = "",
    api_key: str = "",
) -> dict:
    """Calculate fairness metrics from prediction data.

    Input format: comma-separated values with group labels.
    Provide predictions as 'group:prediction' pairs separated by commas.
    Example: "male:1,female:0,male:1,female:1,male:0,female:0"

    If ground_truth is provided, use same format for actual outcomes
    to compute equalized odds and calibration metrics.

    Args:
        predictions: Comma-separated group:prediction pairs
            (e.g. "male:1,female:0,male:1").
        ground_truth: Optional comma-separated group:actual pairs
            for outcome-based metrics.
        api_key: Optional MEOK API key for pro tier.

    Behavior: This tool is read-only and stateless -- it produces analysis
    output without modifying any external systems, databases, or files.
    Safe to call repeatedly with identical inputs (idempotent).
    Free tier: 10/day rate limit. Pro tier: unlimited.
    No authentication required for basic usage.

    When to use: Use this tool when you need to assess, audit, or verify
    compliance requirements. Ideal for gap analysis, readiness checks,
    and generating compliance documentation.

    When NOT to use: Do not use as a substitute for qualified legal counsel.
    This tool provides technical compliance guidance, not legal advice.
    """
    allowed, msg, tier = check_access(api_key)
    if not allowed:
        return {"error": msg, "upgrade_url": "https://meok.ai/pricing"}
    limit_err = _check_rate_limit("fairness_metrics", tier)
    if limit_err:
        return {"error": "rate_limited", "message": limit_err}

    # Parse predictions
    group_preds = defaultdict(list)  # type: Dict[str, List[int]]
    try:
        for pair in predictions.split(","):
            pair = pair.strip()
            if ":" not in pair:
                continue
            group, pred = pair.rsplit(":", 1)
            group_preds[group.strip().lower()].append(int(pred.strip()))
    except (ValueError, IndexError):
        return {
            "error": "invalid_format",
            "message": "Use format: group:prediction (e.g. 'male:1,female:0'). "
                       "Predictions must be 0 or 1.",
        }

    if len(group_preds) < 2:
        return {
            "error": "insufficient_groups",
            "message": "Need at least 2 groups for fairness comparison. "
                       "Found: {}".format(list(group_preds.keys())),
        }

    # Calculate selection rates per group
    group_stats = {}  # type: Dict[str, Dict[str, object]]
    for group, preds in group_preds.items():
        positive_rate = sum(preds) / len(preds) if preds else 0
        group_stats[group] = {
            "total": len(preds),
            "positive": sum(preds),
            "negative": len(preds) - sum(preds),
            "positive_rate": round(positive_rate, 4),
        }

    # Disparate impact (4/5ths rule)
    rates = [(g, s["positive_rate"]) for g, s in group_stats.items()]
    max_rate = max(r[1] for r in rates) if rates else 1
    min_rate = min(r[1] for r in rates) if rates else 0
    disparate_impact_ratio = round(min_rate / max_rate, 4) if max_rate > 0 else 0.0
    passes_four_fifths = disparate_impact_ratio >= 0.8

    # Statistical parity difference
    stat_parity_diff = round(max_rate - min_rate, 4)

    # Parse ground truth if provided
    equalized_odds = None
    if ground_truth:
        group_actuals = defaultdict(list)  # type: Dict[str, List[int]]
        try:
            for pair in ground_truth.split(","):
                pair = pair.strip()
                if ":" not in pair:
                    continue
                group, actual = pair.rsplit(":", 1)
                group_actuals[group.strip().lower()].append(int(actual.strip()))
        except (ValueError, IndexError):
            group_actuals = defaultdict(list)

        if group_actuals and len(group_actuals) >= 2:
            # Calculate TPR and FPR per group
            eo_stats = {}  # type: Dict[str, Dict[str, float]]
            for group in group_preds:
                if group not in group_actuals:
                    continue
                preds = group_preds[group]
                actuals = group_actuals[group]
                n = min(len(preds), len(actuals))
                tp = sum(1 for i in range(n) if preds[i] == 1 and actuals[i] == 1)
                fp = sum(1 for i in range(n) if preds[i] == 1 and actuals[i] == 0)
                fn = sum(1 for i in range(n) if preds[i] == 0 and actuals[i] == 1)
                tn = sum(1 for i in range(n) if preds[i] == 0 and actuals[i] == 0)
                tpr = tp / (tp + fn) if (tp + fn) > 0 else 0.0
                fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
                eo_stats[group] = {
                    "true_positive_rate": round(tpr, 4),
                    "false_positive_rate": round(fpr, 4),
                    "accuracy": round((tp + tn) / n, 4) if n > 0 else 0.0,
                }

            if eo_stats:
                tprs = [s["true_positive_rate"] for s in eo_stats.values()]
                fprs = [s["false_positive_rate"] for s in eo_stats.values()]
                equalized_odds = {
                    "group_metrics": eo_stats,
                    "tpr_gap": round(max(tprs) - min(tprs), 4),
                    "fpr_gap": round(max(fprs) - min(fprs), 4),
                    "equalized_odds_satisfied": (max(tprs) - min(tprs)) < 0.1
                    and (max(fprs) - min(fprs)) < 0.1,
                }

    # Overall fairness assessment
    issues = []  # type: List[str]
    if not passes_four_fifths:
        issues.append(
            "FAILS 4/5ths rule (disparate impact ratio {:.2f} < 0.80) -- "
            "prima facie evidence of discrimination under US EEOC "
            "guidelines".format(disparate_impact_ratio)
        )
    if stat_parity_diff > 0.1:
        issues.append(
            "Statistical parity gap {:.2f} exceeds 0.10 threshold -- "
            "groups receive positive outcomes at significantly different "
            "rates".format(stat_parity_diff)
        )
    if equalized_odds and not equalized_odds["equalized_odds_satisfied"]:
        issues.append(
            "Equalized odds NOT satisfied -- error rates differ across groups"
        )

    return {
        "group_statistics": group_stats,
        "disparate_impact": {
            "ratio": disparate_impact_ratio,
            "passes_four_fifths_rule": passes_four_fifths,
            "highest_rate_group": max(rates, key=lambda x: x[1])[0] if rates else "N/A",
            "lowest_rate_group": min(rates, key=lambda x: x[1])[0] if rates else "N/A",
        },
        "statistical_parity": {
            "difference": stat_parity_diff,
            "acceptable": stat_parity_diff < 0.1,
        },
        "equalized_odds": equalized_odds,
        "fairness_issues": issues if issues else ["No significant fairness issues detected"],
        "overall_assessment": "FAIL -- fairness issues detected"
        if issues
        else "PASS -- no significant fairness issues",
        "eu_ai_act_note": (
            "Article 10(2)(f) requires examination of training data for biases. "
            "Article 10(3) requires data to be representative. "
            "Document these metrics in Annex IV Section 4."
        ),
        "next_step": "Use mitigation_recommendations for specific remediation strategies",
        "meok_labs": "https://meok.ai",
    }
```

- **server.py:58-73 (helper)**: Rate-limiting helper called by `fairness_metrics` to enforce the free tier (10/day) vs pro (unlimited) limits.
```python
def _check_rate_limit(caller="anonymous", tier="free"):
    # type: (str, str) -> Optional[str]
    """Returns error string if rate-limited, else None."""
    if tier == "pro":
        return None
    now = datetime.now()
    cutoff = now - timedelta(days=1)
    _usage[caller] = [t for t in _usage[caller] if t > cutoff]
    if len(_usage[caller]) >= FREE_DAILY_LIMIT:
        return (
            "Free tier limit reached ({}/day). "
            "Upgrade to MEOK AI Labs Pro for unlimited access at $29/mo: "
            "https://meok.ai/mcp/bias-detection/pro".format(FREE_DAILY_LIMIT)
        )
    _usage[caller].append(now)
    return None
```

- **server.py:46-49 (helper)**: Access-check helper used by `fairness_metrics` to validate the API key and determine the tier (free/pro).
```python
def check_access(api_key=""):
    # type: (str) -> Tuple[bool, str, str]
    """Unified access check -- works with or without shared auth engine."""
    return _shared_check_access(api_key)
```
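The handler's equalized-odds step reduces to per-group confusion matrices followed by a gap comparison. A condensed standalone sketch (the function `equalized_odds_gaps` is illustrative; it assumes already-parsed, aligned per-group lists rather than the string format the tool accepts):

```python
def equalized_odds_gaps(preds: dict, actuals: dict) -> dict:
    """TPR/FPR gaps across groups from aligned per-group
    prediction and ground-truth lists of 0/1 values."""
    rates = {}
    for g in preds:
        p, a = preds[g], actuals[g]
        tp = sum(1 for x, y in zip(p, a) if x == 1 and y == 1)
        fp = sum(1 for x, y in zip(p, a) if x == 1 and y == 0)
        fn = sum(1 for x, y in zip(p, a) if x == 0 and y == 1)
        tn = sum(1 for x, y in zip(p, a) if x == 0 and y == 0)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        rates[g] = (tpr, fpr)
    tprs = [r[0] for r in rates.values()]
    fprs = [r[1] for r in rates.values()]
    return {"tpr_gap": max(tprs) - min(tprs), "fpr_gap": max(fprs) - min(fprs)}


print(equalized_odds_gaps(
    {"a": [1, 1, 0, 0], "b": [1, 0, 0, 0]},
    {"a": [1, 0, 1, 0], "b": [1, 1, 0, 0]},
))
# {'tpr_gap': 0.0, 'fpr_gap': 0.5}
```

With the handler's 0.1 thresholds, this example would fail equalized odds on the FPR gap even though the TPR gap is zero, which is why the tool reports both gaps separately.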