Glama
CSOAI-ORG

EU AI Act Compliance MCP

generate_documentation

Generate Annex IV compliant technical documentation templates for EU AI Act. Provide system details to receive a structured markdown template.

Instructions

Generate Article 11 / Annex IV compliant technical documentation template.

Produces a complete markdown template following the Annex IV structure of the EU AI Act. Fill in the bracketed sections with your specific information.

Args:

- system_name: Name of the AI system.
- provider_name: Legal name of the AI system provider.
- provider_contact: Provider contact details (address, email, phone).
- version: System version number/identifier.
- intended_purpose: Clear description of the system's intended purpose.
- description: General description of what the system does.
- data_description: Description of training/validation/testing data used.
- architecture_description: Description of system architecture and algorithms.
- performance_metrics: Known accuracy/performance metrics (if available).
- risk_management_description: Description of risk management measures (if available).
- human_oversight_description: Description of human oversight measures (if available).
- caller: Identifier for rate limiting.
- api_key: API key (optional); determines the access tier: "free" (10 calls/day) or "pro" (unlimited, $29/mo).

Behavior: This tool generates structured output without modifying external systems. Output is deterministic for identical inputs. No side effects. Free tier: 10/day rate limit. Pro tier: unlimited. No authentication required for basic usage.

When to use: Use this tool when you need to assess, audit, or verify compliance requirements. Ideal for gap analysis, readiness checks, and generating compliance documentation.

When NOT to use: Do not use as a substitute for qualified legal counsel. This tool provides technical compliance guidance, not legal advice.
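
For illustration, a minimal invocation might pass arguments like the following; every value here is hypothetical, and the optional fields are simply omitted so their defaults apply:

```python
# Hypothetical example arguments for a generate_documentation call.
# All eight required fields are provided; optional fields fall back
# to their defaults (empty strings, caller="anonymous").
example_args = {
    "system_name": "AcmeVision QC",
    "provider_name": "Acme Manufacturing GmbH",
    "provider_contact": "Musterstrasse 1, Berlin; compliance@example.com",
    "version": "2.1.0",
    "intended_purpose": "Automated visual defect detection on a production line.",
    "description": "A CNN-based image classifier that flags defective parts.",
    "data_description": "120k labelled production-line images (2022-2024).",
    "architecture_description": "ResNet-50 backbone with a binary classification head.",
}

# The eight required parameters from the input schema.
REQUIRED = {
    "system_name", "provider_name", "provider_contact", "version",
    "intended_purpose", "description", "data_description",
    "architecture_description",
}
assert REQUIRED <= example_args.keys()
```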

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| system_name | Yes | | |
| provider_name | Yes | | |
| provider_contact | Yes | | |
| version | Yes | | |
| intended_purpose | Yes | | |
| description | Yes | | |
| data_description | Yes | | |
| architecture_description | Yes | | |
| performance_metrics | No | | |
| risk_management_description | No | | |
| human_oversight_description | No | | |
| caller | No | | anonymous |
| api_key | No | | |

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |
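
The `result` value is a JSON object. Judging from the handler shown in the Implementation Reference, a successful call returns a payload shaped roughly like this (strings abridged here for readability):

```python
# Shape of a successful generate_documentation response,
# mirroring the handler's return value (strings abridged).
success_response = {
    "document_format": "markdown",
    "template": "# Technical Documentation \u2014 EU AI Act Annex IV\n...",
    "sections_requiring_completion": [
        "1.4 Interaction with External Hardware/Software",
        # ...remaining bracketed sections...
    ],
    "compliance_note": "Complete all bracketed sections before submission.",
    "meok_labs": "https://meok.ai",
}

# Failures use an "error" key instead of "template":
rate_limited = {"error": "rate_limited", "message": "Daily limit reached."}
```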

Implementation Reference

  • The main handler function for the 'generate_documentation' MCP tool. Decorated with @mcp.tool(), it accepts system details as inputs and generates a complete Annex IV-compliant technical documentation template in markdown format, following the 8-section structure required by Article 11 of the EU AI Act.
    @mcp.tool()
    def generate_documentation(
        system_name: str,
        provider_name: str,
        provider_contact: str,
        version: str,
        intended_purpose: str,
        description: str,
        data_description: str,
        architecture_description: str,
        performance_metrics: str = "",
        risk_management_description: str = "",
        human_oversight_description: str = "",
        caller: str = "anonymous",
        api_key: str = "") -> dict:
        """Generate Article 11 / Annex IV compliant technical documentation template.
    
        Produces a complete markdown template following the Annex IV structure of the
        EU AI Act. Fill in the bracketed sections with your specific information.
    
        Args:
            system_name: Name of the AI system.
            provider_name: Legal name of the AI system provider.
            provider_contact: Provider contact details (address, email, phone).
            version: System version number/identifier.
            intended_purpose: Clear description of the system's intended purpose.
            description: General description of what the system does.
            data_description: Description of training/validation/testing data used.
            architecture_description: Description of system architecture and algorithms.
            performance_metrics: Known accuracy/performance metrics (if available).
            risk_management_description: Description of risk management measures (if available).
            human_oversight_description: Description of human oversight measures (if available).
            caller: Identifier for rate limiting.
            api_key: API key (optional); determines the access tier: "free" (10 calls/day) or "pro" (unlimited, $29/mo).
    
        Behavior:
            This tool generates structured output without modifying external systems.
            Output is deterministic for identical inputs. No side effects.
            Free tier: 10/day rate limit. Pro tier: unlimited.
            No authentication required for basic usage.
    
        When to use:
            Use this tool when you need to assess, audit, or verify compliance
            requirements. Ideal for gap analysis, readiness checks, and generating
            compliance documentation.
    
        When NOT to use:
            Do not use as a substitute for qualified legal counsel. This tool
            provides technical compliance guidance, not legal advice.
        """
        allowed, msg, tier = check_access(api_key)
        if not allowed:
            return {"error": msg, "upgrade_url": "https://meok.ai/pricing"}
        limit_err = _check_rate_limit(caller, tier)
        if limit_err:
            return {"error": "rate_limited", "message": limit_err}
    
        date_str = datetime.now().strftime("%Y-%m-%d")
    
        doc = f"""# Technical Documentation — EU AI Act Annex IV
    ## {system_name}
    
    **Provider:** {provider_name}
    **Contact:** {provider_contact}
    **Version:** {version}
    **Document Date:** {date_str}
    **Regulation:** Regulation (EU) 2024/1689 — Article 11, Annex IV
    **Generated by:** MEOK AI Labs EU AI Act Compliance Server (https://meok.ai)
    
    ---
    
    ## 1. General Description of the AI System (Annex IV, Section 1)
    
    ### 1.1 Intended Purpose
    {intended_purpose}
    
    ### 1.2 Provider Information
    - **Provider Name:** {provider_name}
    - **Contact Details:** {provider_contact}
    - **System Version:** {version}
    - **Date of this version:** {date_str}
    - **Previous versions:** [List previous versions and dates]
    
    ### 1.3 System Description
    {description}
    
    ### 1.4 Interaction with External Hardware/Software
    [Describe how the AI system interacts with hardware or software that is not part of the AI system itself, including APIs, data feeds, external services]
    
    ### 1.5 Software/Firmware Requirements
    [List relevant software and firmware versions, plus any version update requirements]
    
    ### 1.6 Forms of Market Placement
    [Describe all forms in which the system is placed on the market or put into service: SaaS, on-premise, embedded, API, etc.]
    
    ### 1.7 Hardware Requirements
    [Describe the hardware on which the AI system is intended to run, including computational requirements]
    
    ### 1.8 User Interface
    [Describe the user interface provided to the deployer, including screenshots or diagrams]
    
    ---
    
    ## 2. Detailed Description of Elements and Development Process (Annex IV, Section 2)
    
    ### 2.1 Development Methods and Steps
    [Describe the methods and steps performed for the development of the AI system, including any use of pre-trained systems or third-party tools/components]
    
    ### 2.2 Design Specifications
    
    #### 2.2.1 General Logic and Algorithms
    {architecture_description}
    
    #### 2.2.2 Key Design Choices and Rationale
    [Document key design choices including algorithmic approach, model architecture, training methodology, and the rationale for each decision]
    
    #### 2.2.3 Classification and Optimisation Approach
    [Describe what the system is designed to optimise for, the relevance of different parameters, and classification methodology]
    
    #### 2.2.4 Expected Output and Interpretation
    [Describe the expected output of the system and how it should be interpreted]
    
    ### 2.3 System Architecture
    [Provide detailed system architecture diagram and explanation of how software components build on or feed into each other]
    
    ### 2.4 Computational Resources
    [Document all computational resources used in development, training, and deployment — including hardware specifications, cloud services, GPU/TPU usage]
    
    ### 2.5 Data Requirements and Documentation
    
    #### 2.5.1 Data Description
    {data_description}
    
    #### 2.5.2 Data Collection Methodology
    [Describe how data was collected, including sources, timeframes, and sampling approaches]
    
    #### 2.5.3 Data Characteristics
    [Document scope, size, format, and key statistical properties of datasets]
    
    #### 2.5.4 Bias Assessment
    [Document assessment of biases in training data and mitigation measures applied]
    
    ### 2.6 Human Oversight Assessment (per Article 14)
    {human_oversight_description if human_oversight_description else "[Describe the human oversight measures needed, as assessed under Article 14. Include how humans can intervene, override, or stop the system.]"}
    
    ### 2.7 Pre-determined Changes
    [Document any pre-determined changes to the system and its performance that have been assessed at the time of the initial conformity assessment]
    
    ---
    
    ## 3. Monitoring, Functioning, and Control (Annex IV, Section 3)
    
    ### 3.1 Capabilities and Limitations
    {performance_metrics if performance_metrics else "[Document the capabilities and limitations of the AI system, including degrees and range of accuracy for specific groups/contexts]"}
    
    ### 3.2 Foreseeable Unintended Outcomes and Risk Sources
    [Identify reasonably foreseeable unintended outcomes and sources of risk to health, safety, and fundamental rights]
    
    ### 3.3 Human Oversight Measures
    {human_oversight_description if human_oversight_description else "[Detail the specific human oversight measures built into or alongside the system]"}
    
    ### 3.4 Input Data Specifications
    [Specify the input data requirements and expected data formats]
    
    ### 3.5 Output Interpretation Guidance
    [Provide information enabling deployers to correctly interpret the AI system's output]
    
    ---
    
    ## 4. Appropriateness of Performance Metrics (Annex IV, Section 4)
    
    ### 4.1 Metrics Used
    {performance_metrics if performance_metrics else "[List all metrics used to measure accuracy, robustness, and compliance with other requirements set out in Article 15]"}
    
    ### 4.2 Testing and Validation Methodology
    [Describe the testing and validation approaches and methodologies used, including information about the test data used and its main characteristics, metrics used to measure accuracy/robustness and any other relevant requirement]
    
    ### 4.3 Performance Declarations
    [Document the expected level of performance and any declarations of conformity]
    
    ---
    
    ## 5. Risk Management System — Article 9 (Annex IV, Section 5)
    
    ### 5.1 Risk Management System Description
    {risk_management_description if risk_management_description else "[Describe the risk management system as required by Article 9, including: identification of known and foreseeable risks, estimation of risks from intended use and foreseeable misuse, evaluation of risks, adoption of mitigation measures]"}
    
    ### 5.2 Development and Post-Development Risk Minimisation
    [Document choices made during and after development to minimise risk, including testing procedures and results]
    
    ---
    
    ## 6. Changes Throughout the Lifecycle (Annex IV, Section 6)
    
    ### 6.1 Pre-determined Changes
    [Document all pre-determined changes to the system throughout its lifecycle]
    
    ### 6.2 Data Governance and Management Practices (Article 10)
    [Describe data governance and management practices, including data collection, data origin, and data scope]
    
    ---
    
    ## 7. EU Declaration of Conformity — Article 47 (Annex IV, Section 7)
    
    [Reference to the EU declaration of conformity as required by Article 47. This section should be completed after conformity assessment.]
    
    - **Conformity Assessment Body (if applicable):** [Name and notified body number]
    - **Conformity Assessment Procedure:** [Self-assessment per Article 43(1) / Third-party assessment per Article 43(2)]
    - **Declaration Reference Number:** [To be assigned]
    
    ---
    
    ## 8. Post-Market Monitoring System — Article 72 (Annex IV, Section 8)
    
    [Describe the post-market monitoring system established pursuant to Article 72, including: monitoring methodology, data collection from deployers, incident reporting procedures, periodic review schedule]
    
    ---
    
    ## Document Control
    
    | Field | Value |
    |-------|-------|
    | Document Owner | {provider_name} |
    | Classification | [Internal/Confidential/Public] |
    | Review Cycle | [Annually/Upon significant change] |
    | Next Review | [Date] |
    | Approval Authority | [Name and role] |
    
    ---
    
    *This template was generated by the MEOK AI Labs EU AI Act Compliance MCP Server.
    It follows the structure required by Annex IV of Regulation (EU) 2024/1689.
    All bracketed sections must be completed with system-specific information.
    This template does not constitute legal advice — consult qualified legal counsel.*
    
    *MEOK AI Labs | https://meok.ai*
    """
    
        return {
            "document_format": "markdown",
            "template": doc,
            "sections_requiring_completion": [
                "1.4 Interaction with External Hardware/Software",
                "1.5 Software/Firmware Requirements",
                "1.6 Forms of Market Placement",
                "1.7 Hardware Requirements",
                "1.8 User Interface",
                "2.1 Development Methods and Steps",
                "2.2.2 Key Design Choices and Rationale",
                "2.2.3 Classification and Optimisation Approach",
                "2.2.4 Expected Output and Interpretation",
                "2.3 System Architecture",
                "2.4 Computational Resources",
                "2.5.2 Data Collection Methodology",
                "2.5.3 Data Characteristics",
                "2.5.4 Bias Assessment",
                "2.7 Pre-determined Changes",
                "3.2 Foreseeable Unintended Outcomes",
                "3.4 Input Data Specifications",
                "4.2 Testing and Validation Methodology",
                "4.3 Performance Declarations",
                "6.1 Pre-determined Changes",
                "6.2 Data Governance",
                "7. EU Declaration of Conformity",
                "8. Post-Market Monitoring System",
            ],
            "compliance_note": "Complete all bracketed sections before submission. Article 11(1) requires documentation to be drawn up before the system is placed on the market.",
            "meok_labs": "https://meok.ai",
        }
  • server.py:1005-1005 (registration)
    The tool is registered via the @mcp.tool() decorator on the generate_documentation function in the FastMCP server instance 'mcp'.
    @mcp.tool()
  • The function signature defines the input schema: system_name, provider_name, provider_contact, version, intended_purpose, description, data_description, architecture_description are required strings; performance_metrics, risk_management_description, human_oversight_description, caller, api_key are optional strings with defaults.
    def generate_documentation(
        system_name: str,
        provider_name: str,
        provider_contact: str,
        version: str,
        intended_purpose: str,
        description: str,
        data_description: str,
        architecture_description: str,
        performance_metrics: str = "",
        risk_management_description: str = "",
        human_oversight_description: str = "",
        caller: str = "anonymous",
        api_key: str = "") -> dict:
  • Helper function called by generate_documentation for authentication/access control before generating documentation.
    def check_access(api_key: str = ""):
        """Unified access check — works with or without shared auth engine."""
        return _shared_check_access(api_key)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses that the tool generates structured output without side effects, is deterministic, and specifies rate limits and authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-organized with clear sections and front-loaded purpose, but it is somewhat lengthy. Still, every sentence adds value and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, parameters, behavior, and usage guidance. It mentions output is a markdown template, which is sufficient context for agent selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description includes an 'Args' section that explains every parameter in plain language, compensating for the schema having 0% description coverage. All 13 parameters are clearly defined.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates a technical documentation template compliant with Article 11/Annex IV of the EU AI Act, specifying the resource and action. It is distinct from sibling tools like check_compliance or audit_report.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'When to use' and 'When NOT to use' sections, guiding the agent to use it for compliance assessment and warning against substituting for legal counsel.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
