# regulatory_check
Identify gaps in bias compliance by evaluating requirements against EU AI Act Article 10 and NIST AI RMF MAP. Supports multiple jurisdictions.
## Instructions

Check bias requirements against EU AI Act Article 10 and NIST AI RMF MAP requirements.

**Args:**
- `jurisdiction`: Jurisdiction to check against. Options: `eu`, `us_nist`, `uk`, `all`.
- `api_key`: Optional MEOK API key for pro tier.

**Behavior:** This tool is read-only and stateless: it produces analysis output without modifying any external systems, databases, or files. It is safe to call repeatedly with identical inputs (idempotent). Free tier: 10 calls/day. Pro tier: unlimited. No authentication is required for basic usage.

**When to use:** Use this tool when you need to assess, audit, or verify compliance requirements. It is well suited to gap analysis, readiness checks, and generating compliance documentation.

**When NOT to use:** Do not use it as a substitute for qualified legal counsel. This tool provides technical compliance guidance, not legal advice.

**Behavioral Transparency:**
- **Side Effects:** The tool is read-only and produces no side effects. It does not modify any external state, databases, or files; all output is computed in-memory and returned directly to the caller.
- **Authentication:** No authentication is required for basic usage. Pro/Enterprise tiers require a valid MEOK API key passed via the `MEOK_API_KEY` environment variable.
- **Rate Limits:** Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers (`X-RateLimit-Remaining`, `X-RateLimit-Reset`) are included in responses.
- **Error Handling:** Failures return structured error objects with an `error` key; unhandled exceptions are never raised. Invalid inputs return descriptive validation errors.
- **Idempotency:** Fully idempotent: calling with the same inputs always produces the same output, so it is safe to retry on timeout or transient failure.
- **Data Privacy:** No input data is stored, logged, or transmitted to external services. All processing happens locally within the MCP server process.
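Because errors are returned as structured objects rather than raised, a caller should branch on the `error` key. A minimal client-side sketch; the `handle_result` helper and the exception choices are illustrative, not part of the server — only the `{"error": ..., "message": ...}` shape comes from the documented contract:

```python
# Hypothetical client-side wrapper (not part of server.py). Only the
# {"error": ..., "message": ...} response shape is taken from the tool's
# documented error contract; names and exception types are illustrative.
def handle_result(result: dict) -> dict:
    if "error" not in result:
        return result  # success payload, pass through unchanged
    if result["error"] == "rate_limited":
        # Free-tier limit hit; safe to retry after the daily window resets.
        raise RuntimeError("rate limited: " + result.get("message", ""))
    raise ValueError("tool error: " + str(result["error"]))
```

Since the tool is idempotent, retrying the same call after a transient failure is always safe.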
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| jurisdiction | No | Jurisdiction to check against. Options: `eu`, `us_nist`, `uk`, `all`. | `eu` |
| api_key | No | Optional MEOK API key for pro tier. | `""` |
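The handler strips and lower-cases the `jurisdiction` argument before checking it, so `" EU "` and `"eu"` are equivalent. A standalone sketch of that validation step (reimplemented here for illustration; the real check lives inside the handler in server.py):

```python
# Illustrative reimplementation of the handler's jurisdiction validation.
VALID_JURISDICTIONS = ("eu", "us_nist", "uk", "all")

def validate_jurisdiction(raw: str) -> dict:
    # Mirrors the handler's normalization: strip whitespace, lower-case.
    j = raw.strip().lower()
    if j not in VALID_JURISDICTIONS:
        # Same structured-error shape the tool documents for bad input.
        return {
            "error": "unknown_jurisdiction",
            "message": "Unknown jurisdiction '{}'. Valid: eu, us_nist, uk, all".format(j),
        }
    return {"jurisdiction_checked": j}
```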
## Implementation Reference

- server.py:843-963 (handler): Handler function for the regulatory_check tool. Checks bias requirements against EU AI Act, NIST AI RMF, and UK regulations based on the jurisdiction parameter. Returns compliance requirements, assessment date, and a bias compliance checklist.
```python
@mcp.tool()
def regulatory_check(
    jurisdiction: str = "eu",
    api_key: str = "",
) -> dict:
    """Check bias requirements against EU AI Act Article 10 and NIST AI RMF MAP requirements.

    Args:
        jurisdiction: Jurisdiction to check against. Options: eu, us_nist, uk, all.
        api_key: Optional MEOK API key for pro tier.

    Behavior: This tool is read-only and stateless — it produces analysis
    output without modifying any external systems, databases, or files.
    Safe to call repeatedly with identical inputs (idempotent). Free tier:
    10/day rate limit. Pro tier: unlimited. No authentication required for
    basic usage.

    When to use: Use this tool when you need to assess, audit, or verify
    compliance requirements. Ideal for gap analysis, readiness checks, and
    generating compliance documentation.

    When NOT to use: Do not use as a substitute for qualified legal counsel.
    This tool provides technical compliance guidance, not legal advice.

    Behavioral Transparency:
    - Side Effects: This tool is read-only and produces no side effects. It
      does not modify any external state, databases, or files. All output is
      computed in-memory and returned directly to the caller.
    - Authentication: No authentication required for basic usage.
      Pro/Enterprise tiers require a valid MEOK API key passed via the
      MEOK_API_KEY environment variable.
    - Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit
      headers are included in responses (X-RateLimit-Remaining,
      X-RateLimit-Reset).
    - Error Handling: Returns structured error objects with 'error' key on
      failure. Never raises unhandled exceptions. Invalid inputs return
      descriptive validation errors.
    - Idempotency: Fully idempotent — calling with the same inputs always
      produces the same output. Safe to retry on timeout or transient failure.
    - Data Privacy: No input data is stored, logged, or transmitted to
      external services. All processing happens locally within the MCP server
      process.
    """
    allowed, msg, tier = check_access(api_key)
    if not allowed:
        return {"error": msg, "upgrade_url": "https://meok.ai/pricing"}
    limit_err = _check_rate_limit("regulatory_check", tier)
    if limit_err:
        return {"error": "rate_limited", "message": limit_err}

    jurisdiction = jurisdiction.strip().lower()

    eu_requirements = {
        "framework": "EU AI Act (Regulation (EU) 2024/1689)",
        "key_articles": {
            "Article 10(2)(f)": "Training, validation, and testing datasets shall be examined for possible biases that are likely to affect health and safety or fundamental rights",
            "Article 10(3)": "Datasets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose",
            "Article 10(4)": "Validation and testing datasets shall be appropriate, sufficiently representative, and proportionate",
            "Article 10(5)": "Personal data may be processed for bias detection and correction to the extent strictly necessary (special derogation from GDPR purpose limitation)",
            "Article 9(2)(a)": "Risk management shall include identification and analysis of known and reasonably foreseeable risks including bias",
            "Article 14(4)(b)": "Human overseers shall be aware of automation bias",
            "Article 15(1)": "AI systems shall achieve appropriate levels of accuracy for specific persons or groups",
        },
        "enforcement_date": "2 August 2026 (high-risk systems)",
        "penalty": "Up to EUR 15,000,000 or 3% of global annual turnover for non-compliance",
    }
    nist_requirements = {
        "framework": "NIST AI Risk Management Framework 1.0",
        "key_functions": {
            "MAP 2.3": "Scientific integrity and TEVV considerations are identified and documented, including bias measurement",
            "MEASURE 2.6": "AI system performance or assurance criteria are measured, including disparate performance across groups",
            "MEASURE 2.7": "AI system security and resilience, including resistance to bias attacks",
            "MANAGE 2.2": "Mechanisms are in place and applied to sustain value of deployed AI systems, including bias monitoring",
            "GOVERN 1.1": "Policies and procedures reflect risk management priorities including bias and fairness",
        },
        "enforcement": "Voluntary (mandatory for US federal agencies per Executive Order 14110)",
        "penalty": "N/A (framework, not law) but federal procurement may require compliance",
    }
    uk_requirements = {
        "framework": "UK AI Regulation (pro-innovation, principles-based)",
        "key_principles": {
            "Fairness": "AI systems should not create unfair discrimination or undermine legal rights",
            "Transparency": "Organisations should be able to explain their AI systems including bias considerations",
            "Contestability": "Individuals should be able to challenge AI decisions affecting them",
            "Safety": "AI systems should function in a robust, secure, and safe way including against bias",
        },
        "enforcement": "Sector-specific regulators (FCA, ICO, CMA, etc.)",
        "penalty": "Varies by sector regulator",
    }

    result = {
        "jurisdiction_checked": jurisdiction,
        "assessment_date": datetime.now().isoformat(),
    }  # type: Dict[str, object]
    if jurisdiction in ("eu", "all"):
        result["eu_ai_act"] = eu_requirements
    if jurisdiction in ("us_nist", "all"):
        result["nist_ai_rmf"] = nist_requirements
    if jurisdiction in ("uk", "all"):
        result["uk_ai_regulation"] = uk_requirements
    if jurisdiction not in ("eu", "us_nist", "uk", "all"):
        return {
            "error": "unknown_jurisdiction",
            "message": "Unknown jurisdiction '{}'. Valid: eu, us_nist, uk, all".format(jurisdiction),
        }

    result["bias_compliance_checklist"] = [
        {"check": "Training data examined for biases", "eu_ref": "Article 10(2)(f)", "nist_ref": "MAP 2.3"},
        {"check": "Datasets are representative of deployment population", "eu_ref": "Article 10(3)", "nist_ref": "MEASURE 2.6"},
        {"check": "Fairness metrics calculated and documented", "eu_ref": "Annex IV Section 4", "nist_ref": "MEASURE 2.6"},
        {"check": "Bias mitigation measures applied and documented", "eu_ref": "Article 9", "nist_ref": "MANAGE 2.2"},
        {"check": "Human oversight trained on automation bias", "eu_ref": "Article 14(4)(b)", "nist_ref": "GOVERN 1.1"},
        {"check": "Disaggregated performance metrics reported", "eu_ref": "Article 15(1)", "nist_ref": "MEASURE 2.6"},
        {"check": "Ongoing bias monitoring in production", "eu_ref": "Article 72", "nist_ref": "MANAGE 2.2"},
        {"check": "Bias documented in technical documentation", "eu_ref": "Annex IV Section 2.5.4", "nist_ref": "MAP 2.3"},
    ]
    result["meok_labs"] = "https://meok.ai"
    return result
```

- server.py:843-843 (registration): Tool registration via the @mcp.tool() decorator on the FastMCP instance named 'AI Bias Detection'.
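The handler's jurisdiction dispatch reduces to three membership tests, one per framework. Sketched standalone (the `frameworks_for` name is illustrative, not in server.py), it shows which top-level keys a caller can expect in the result:

```python
# Illustrative sketch of the handler's jurisdiction dispatch; the function
# name is hypothetical, but the codes and result keys match server.py.
def frameworks_for(jurisdiction: str) -> list:
    # "all" selects every framework; otherwise each code maps to one key.
    j = jurisdiction.strip().lower()
    keys = []
    if j in ("eu", "all"):
        keys.append("eu_ai_act")
    if j in ("us_nist", "all"):
        keys.append("nist_ai_rmf")
    if j in ("uk", "all"):
        keys.append("uk_ai_regulation")
    return keys
```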
```python
@mcp.tool()
```

- server.py:46-49 (helper): Access control helper called at the start of regulatory_check to validate the API key and determine the tier.
```python
def check_access(api_key=""):
    # type: (str) -> Tuple[bool, str, str]
    """Unified access check -- works with or without shared auth engine."""
    return _shared_check_access(api_key)
```

- server.py:58-70 (helper): Rate limiting helper called by regulatory_check to enforce the free tier daily limit (10 calls/day).
```python
def _check_rate_limit(caller="anonymous", tier="free"):
    # type: (str, str) -> Optional[str]
    """Returns error string if rate-limited, else None."""
    if tier == "pro":
        return None
    now = datetime.now()
    cutoff = now - timedelta(days=1)
    _usage[caller] = [t for t in _usage[caller] if t > cutoff]
    if len(_usage[caller]) >= FREE_DAILY_LIMIT:
        return (
            "Free tier limit reached ({}/day). "
            "Upgrade to MEOK AI Labs Pro for unlimited access at $29/mo: "
            "https://meok.ai/mcp/bias-detection/pro".format(FREE_DAILY_LIMIT)
        )
```
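The helper implements a sliding 24-hour window: timestamps older than one day are pruned before the count is compared to the limit. A self-contained sketch of the same scheme; the injectable `now` parameter is an addition for testability, and recording the call after a successful check is assumed from the docstring's "else None" contract rather than shown in the excerpt:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Standalone sketch of a sliding-window rate limiter in the style of
# server.py's helper. The `now` parameter is an assumption added here for
# deterministic testing; it is not part of the original signature.
FREE_DAILY_LIMIT = 10
_usage = defaultdict(list)  # caller -> list of call timestamps

def check_rate_limit(caller="anonymous", tier="free", now=None):
    """Return an error string if rate-limited, else record the call and return None."""
    if tier == "pro":
        return None  # pro tier is unlimited
    now = now or datetime.now()
    cutoff = now - timedelta(days=1)
    # Prune timestamps that have fallen out of the 24h window.
    _usage[caller] = [t for t in _usage[caller] if t > cutoff]
    if len(_usage[caller]) >= FREE_DAILY_LIMIT:
        return "Free tier limit reached ({}/day).".format(FREE_DAILY_LIMIT)
    _usage[caller].append(now)
    return None
```

Because pruning happens on every call, no background cleanup task is needed; stale entries cost one list comprehension per caller.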