Glama

regulatory_check

Identify gaps in bias compliance by evaluating requirements against EU AI Act Article 10 and NIST AI RMF MAP. Supports multiple jurisdictions.

Instructions

Check bias requirements against EU AI Act Article 10 and NIST AI RMF MAP requirements.

Args:
    jurisdiction: Jurisdiction to check against. Options: eu, us_nist, uk, all.
    api_key: Optional MEOK API key for pro tier.

Behavior: This tool is read-only and stateless — it produces analysis output without modifying any external systems, databases, or files. Safe to call repeatedly with identical inputs (idempotent). Free tier: 10/day rate limit. Pro tier: unlimited. No authentication required for basic usage.

When to use: Use this tool when you need to assess, audit, or verify compliance requirements. Ideal for gap analysis, readiness checks, and generating compliance documentation.

When NOT to use: Do not use as a substitute for qualified legal counsel. This tool provides technical compliance guidance, not legal advice.

Behavioral Transparency:

- Side Effects: This tool is read-only and produces no side effects. It does not modify any external state, databases, or files. All output is computed in-memory and returned directly to the caller.
- Authentication: No authentication required for basic usage. Pro/Enterprise tiers require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
- Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
- Error Handling: Returns structured error objects with an 'error' key on failure. Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
- Idempotency: Fully idempotent — calling with the same inputs always produces the same output. Safe to retry on timeout or transient failure.
- Data Privacy: No input data is stored, logged, or transmitted to external services. All processing happens locally within the MCP server process.
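Because failures come back as structured objects rather than exceptions, a caller can branch on the documented 'error' key. A minimal sketch (the `handle_response` helper and sample dicts below are illustrative, not part of the server):

```python
# Minimal sketch of handling regulatory_check responses.
# Per the documented contract, results carry jurisdiction data and
# failures carry an 'error' key; the tool never raises.

def handle_response(response: dict) -> str:
    """Branch on the documented structured-error contract."""
    if "error" in response:
        if response["error"] == "rate_limited":
            # Safe to retry later: the tool is idempotent.
            return "retry tomorrow: " + response.get("message", "")
        return "failed: " + response["error"]
    return "checked jurisdiction: " + response["jurisdiction_checked"]

ok = {"jurisdiction_checked": "eu", "eu_ai_act": {"framework": "EU AI Act"}}
limited = {"error": "rate_limited", "message": "Free tier limit reached (10/day)."}

print(handle_response(ok))       # checked jurisdiction: eu
print(handle_response(limited))
```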

Input Schema

Name          Required  Description  Default
jurisdiction  No                     eu
api_key       No

Implementation Reference

  • Handler function for the regulatory_check tool. Checks bias requirements against EU AI Act, NIST AI RMF, and UK regulations based on jurisdiction parameter. Returns compliance requirements, assessment date, and bias compliance checklist.
    @mcp.tool()
    def regulatory_check(
        jurisdiction: str = "eu",
        api_key: str = "",
    ) -> dict:
        """Check bias requirements against EU AI Act Article 10 and NIST AI RMF MAP requirements.
    
        Args:
            jurisdiction: Jurisdiction to check against. Options: eu, us_nist, uk, all.
            api_key: Optional MEOK API key for pro tier.
    
        Behavior:
            This tool is read-only and stateless — it produces analysis output
            without modifying any external systems, databases, or files.
            Safe to call repeatedly with identical inputs (idempotent).
            Free tier: 10/day rate limit. Pro tier: unlimited.
            No authentication required for basic usage.
    
        When to use:
            Use this tool when you need to assess, audit, or verify compliance
            requirements. Ideal for gap analysis, readiness checks, and generating
            compliance documentation.
    
        When NOT to use:
            Do not use as a substitute for qualified legal counsel. This tool
            provides technical compliance guidance, not legal advice.
        Behavioral Transparency:
            - Side Effects: This tool is read-only and produces no side effects. It does not modify
              any external state, databases, or files. All output is computed in-memory and returned
              directly to the caller.
            - Authentication: No authentication required for basic usage. Pro/Enterprise tiers
              require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
            - Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are
              included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
            - Error Handling: Returns structured error objects with 'error' key on failure.
              Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
            - Idempotency: Fully idempotent — calling with the same inputs always produces the
              same output. Safe to retry on timeout or transient failure.
            - Data Privacy: No input data is stored, logged, or transmitted to external services.
              All processing happens locally within the MCP server process.
        """
        allowed, msg, tier = check_access(api_key)
        if not allowed:
            return {"error": msg, "upgrade_url": "https://meok.ai/pricing"}
        limit_err = _check_rate_limit("regulatory_check", tier)
        if limit_err:
            return {"error": "rate_limited", "message": limit_err}
    
        jurisdiction = jurisdiction.strip().lower()
    
        eu_requirements = {
            "framework": "EU AI Act (Regulation (EU) 2024/1689)",
            "key_articles": {
                "Article 10(2)(f)": "Training, validation, and testing datasets shall be examined for possible biases that are likely to affect health and safety or fundamental rights",
                "Article 10(3)": "Datasets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose",
                "Article 10(4)": "Validation and testing datasets shall be appropriate, sufficiently representative, and proportionate",
                "Article 10(5)": "Personal data may be processed for bias detection and correction to the extent strictly necessary (special derogation from GDPR purpose limitation)",
                "Article 9(2)(a)": "Risk management shall include identification and analysis of known and reasonably foreseeable risks including bias",
                "Article 14(4)(b)": "Human overseers shall be aware of automation bias",
                "Article 15(1)": "AI systems shall achieve appropriate levels of accuracy for specific persons or groups",
            },
            "enforcement_date": "2 August 2026 (high-risk systems)",
            "penalty": "Up to EUR 15,000,000 or 3% of global annual turnover for non-compliance",
        }
    
        nist_requirements = {
            "framework": "NIST AI Risk Management Framework 1.0",
            "key_functions": {
                "MAP 2.3": "Scientific integrity and TEVV considerations are identified and documented, including bias measurement",
                "MEASURE 2.6": "AI system performance or assurance criteria are measured, including disparate performance across groups",
                "MEASURE 2.7": "AI system security and resilience, including resistance to bias attacks",
                "MANAGE 2.2": "Mechanisms are in place and applied to sustain value of deployed AI systems, including bias monitoring",
                "GOVERN 1.1": "Policies and procedures reflect risk management priorities including bias and fairness",
            },
            "enforcement": "Voluntary (mandatory for US federal agencies per Executive Order 14110)",
            "penalty": "N/A (framework, not law) but federal procurement may require compliance",
        }
    
        uk_requirements = {
            "framework": "UK AI Regulation (pro-innovation, principles-based)",
            "key_principles": {
                "Fairness": "AI systems should not create unfair discrimination or undermine legal rights",
                "Transparency": "Organisations should be able to explain their AI systems including bias considerations",
                "Contestability": "Individuals should be able to challenge AI decisions affecting them",
                "Safety": "AI systems should function in a robust, secure, and safe way including against bias",
            },
            "enforcement": "Sector-specific regulators (FCA, ICO, CMA, etc.)",
            "penalty": "Varies by sector regulator",
        }
    
        result = {
            "jurisdiction_checked": jurisdiction,
            "assessment_date": datetime.now().isoformat(),
        }  # type: Dict[str, object]
    
        # Validate before assembling jurisdiction sections.
        if jurisdiction not in ("eu", "us_nist", "uk", "all"):
            return {
                "error": "unknown_jurisdiction",
                "message": "Unknown jurisdiction '{}'. Valid: eu, us_nist, uk, all".format(jurisdiction),
            }

        if jurisdiction in ("eu", "all"):
            result["eu_ai_act"] = eu_requirements
        if jurisdiction in ("us_nist", "all"):
            result["nist_ai_rmf"] = nist_requirements
        if jurisdiction in ("uk", "all"):
            result["uk_ai_regulation"] = uk_requirements
    
        result["bias_compliance_checklist"] = [
            {"check": "Training data examined for biases", "eu_ref": "Article 10(2)(f)", "nist_ref": "MAP 2.3"},
            {"check": "Datasets are representative of deployment population", "eu_ref": "Article 10(3)", "nist_ref": "MEASURE 2.6"},
            {"check": "Fairness metrics calculated and documented", "eu_ref": "Annex IV Section 4", "nist_ref": "MEASURE 2.6"},
            {"check": "Bias mitigation measures applied and documented", "eu_ref": "Article 9", "nist_ref": "MANAGE 2.2"},
            {"check": "Human oversight trained on automation bias", "eu_ref": "Article 14(4)(b)", "nist_ref": "GOVERN 1.1"},
            {"check": "Disaggregated performance metrics reported", "eu_ref": "Article 15(1)", "nist_ref": "MEASURE 2.6"},
            {"check": "Ongoing bias monitoring in production", "eu_ref": "Article 72", "nist_ref": "MANAGE 2.2"},
            {"check": "Bias documented in technical documentation", "eu_ref": "Annex IV Section 2.5.4", "nist_ref": "MAP 2.3"},
        ]
    
        result["meok_labs"] = "https://meok.ai"
        return result
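The handler depends on server-side helpers (`check_access`, `_check_rate_limit`) and on the full requirement dictionaries, but its jurisdiction dispatch can be sketched in isolation. In the sketch below, `FRAMEWORKS` is a stand-in for the full requirement dicts, and the checklist and access control are omitted:

```python
from datetime import datetime

# Stand-ins for the full requirement dictionaries built in the handler.
FRAMEWORKS = {
    "eu": ("eu_ai_act", {"framework": "EU AI Act (Regulation (EU) 2024/1689)"}),
    "us_nist": ("nist_ai_rmf", {"framework": "NIST AI Risk Management Framework 1.0"}),
    "uk": ("uk_ai_regulation", {"framework": "UK AI Regulation (pro-innovation, principles-based)"}),
}

def regulatory_check_sketch(jurisdiction: str = "eu") -> dict:
    """Simplified dispatch mirroring the handler's jurisdiction logic."""
    jurisdiction = jurisdiction.strip().lower()
    if jurisdiction not in ("eu", "us_nist", "uk", "all"):
        return {
            "error": "unknown_jurisdiction",
            "message": "Unknown jurisdiction '{}'. Valid: eu, us_nist, uk, all".format(jurisdiction),
        }
    result = {
        "jurisdiction_checked": jurisdiction,
        "assessment_date": datetime.now().isoformat(),
    }
    # "all" selects every framework; a single key selects just that one.
    for key, (field, requirements) in FRAMEWORKS.items():
        if jurisdiction in (key, "all"):
            result[field] = requirements
    return result

print(sorted(regulatory_check_sketch("all").keys()))
# ['assessment_date', 'eu_ai_act', 'jurisdiction_checked', 'nist_ai_rmf', 'uk_ai_regulation']
```

Note that input is normalized with `strip().lower()`, so " EU " and "eu" are equivalent, matching the handler.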
  • server.py:843-843 (registration)
    Tool registration via @mcp.tool() decorator on the FastMCP instance named 'AI Bias Detection'.
    @mcp.tool()
  • Access control helper called at start of regulatory_check to validate API key and determine tier.
    def check_access(api_key=""):
        # type: (str) -> Tuple[bool, str, str]
        """Unified access check -- works with or without shared auth engine."""
        return _shared_check_access(api_key)
  • Rate limiting helper called by regulatory_check to enforce free tier daily limit (10 calls/day).
    def _check_rate_limit(caller="anonymous", tier="free"):
        # type: (str, str) -> Optional[str]
        """Returns error string if rate-limited, else None."""
        if tier == "pro":
            return None
        now = datetime.now()
        cutoff = now - timedelta(days=1)
        _usage[caller] = [t for t in _usage[caller] if t > cutoff]
        if len(_usage[caller]) >= FREE_DAILY_LIMIT:
            return (
                "Free tier limit reached ({}/day). "
                "Upgrade to MEOK AI Labs Pro for unlimited access at $29/mo: "
                "https://meok.ai/mcp/bias-detection/pro".format(FREE_DAILY_LIMIT)
            )
        _usage[caller].append(now)  # record this call against the daily window
        return None
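The helper implements a sliding 24-hour window (timestamps older than `now - timedelta(days=1)` are pruned) rather than a calendar-day reset. A self-contained sketch, with `_usage` and `FREE_DAILY_LIMIT` redefined locally, shows the effect:

```python
from collections import defaultdict
from datetime import datetime, timedelta

FREE_DAILY_LIMIT = 10
_usage = defaultdict(list)  # caller -> timestamps of recent calls

def check_rate_limit_sketch(caller="anonymous", tier="free", now=None):
    """Sliding 24-hour window; returns an error string when limited, else None."""
    if tier == "pro":
        return None  # pro tier is never rate-limited
    now = now or datetime.now()
    cutoff = now - timedelta(days=1)
    # Drop timestamps that have aged out of the 24-hour window.
    _usage[caller] = [t for t in _usage[caller] if t > cutoff]
    if len(_usage[caller]) >= FREE_DAILY_LIMIT:
        return "Free tier limit reached ({}/day).".format(FREE_DAILY_LIMIT)
    _usage[caller].append(now)
    return None

# The 11th call within 24 hours is rejected; pro tier is never limited.
for _ in range(FREE_DAILY_LIMIT):
    assert check_rate_limit_sketch("alice") is None
assert check_rate_limit_sketch("alice") is not None
assert check_rate_limit_sketch("alice", tier="pro") is None
```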
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given no annotations, the description carries full burden and excels. It details side effects (read-only, stateless, idempotent), authentication (no auth for basic, API key for pro), rate limits (10/day free, unlimited pro, with headers), error handling (structured errors), and data privacy. This is exhaustive and enables safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with a clear purpose statement and well-organized into sections. However, it is somewhat verbose, with some redundancy between the 'Behavior' and 'Behavioral Transparency' sections. The effective structure earns a high score, but conciseness could be improved by merging overlapping content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 optional parameters, no output schema), the description provides comprehensive behavioral and usage information. The only minor gap is the lack of explicit output format description; while it mentions 'analysis output', specifying the structure would make it complete. Still, it fully enables correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining the jurisdiction parameter with its options (eu, us_nist, uk, all) and describing the api_key as optional for pro tier. While helpful, it could have enumerated the jurisdiction options more explicitly or noted default behavior beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: checking bias requirements against EU AI Act Article 10 and NIST AI RMF MAP requirements. It specifies supported jurisdictions (eu, us_nist, uk, all) and distinguishes itself from sibling tools by focusing on compliance assessment, audit, and verification, whereas siblings like detect_bias or fairness_metrics serve different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit 'When to use' and 'When NOT to use' sections, guiding agents to use this tool for gap analysis, readiness checks, and compliance documentation, and cautioning against its use as a substitute for legal counsel. However, it does not directly compare with sibling tools or specify alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
