audit_ai_bom_completeness

Analyze an AI-BOM JSON for completeness across 10 mandatory categories and receive per-category pass/fail results plus a list of gaps.

Instructions

Audit an existing AI-BOM for completeness against the 10 required field categories. Returns per-category pass/fail + gap list.

Behavior: This tool is read-only and stateless — it produces analysis output without modifying any external systems, databases, or files. Safe to call repeatedly with identical inputs (idempotent). Free tier: 10/day rate limit. Pro tier: unlimited. No authentication required for basic usage.

When to use: Use this tool when you need structured analysis or classification of inputs against established frameworks or standards.

When NOT to use: Not suitable for real-time production decision-making without human review of results.

Args: ai_bom_json (str): The ai bom json to analyze or process. api_key (str): The api key to analyze or process.

Behavioral Transparency:
- Side Effects: This tool is read-only and produces no side effects. It does not modify any external state, databases, or files. All output is computed in-memory and returned directly to the caller.
- Authentication: No authentication required for basic usage. Pro/Enterprise tiers require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
- Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
- Error Handling: Returns structured error objects with 'error' key on failure. Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
- Idempotency: Fully idempotent — calling with the same inputs always produces the same output. Safe to retry on timeout or transient failure.
- Data Privacy: No input data is stored, logged, or transmitted to external services. All processing happens locally within the MCP server process.
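Because the tool is idempotent and reports failures as structured error objects rather than raising, a caller can safely retry on transient transport failures. A minimal sketch, where `call_tool` is a hypothetical stand-in for however your MCP client invokes audit_ai_bom_completeness (it is assumed to return the tool's JSON string):

```python
import json
import time

def call_with_retry(call_tool, ai_bom_json, retries=3, delay=1.0):
    """Retry an idempotent tool call on transient transport failures.

    call_tool is a hypothetical stand-in for an MCP client invocation;
    it takes the AI-BOM JSON string and returns the tool's JSON output.
    """
    for attempt in range(retries):
        try:
            result = json.loads(call_tool(ai_bom_json))
        except (TimeoutError, ConnectionError):
            # Safe to retry: identical inputs always produce identical output
            time.sleep(delay * (attempt + 1))
            continue
        # The tool signals failure via an 'error' key, never an exception
        if "error" in result:
            raise RuntimeError(result["error"])
        return result
    raise RuntimeError("tool call failed after retries")
```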

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| ai_bom_json | Yes | | |
| api_key | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The core handler function that audits an AI-BOM JSON document against the 10 required field categories defined in AI_BOM_REQUIRED_FIELDS. It parses input JSON, checks each category's fields via substring matching (accounting for underscores/spaces), returns per-category COMPLETE/PARTIAL/MISSING status, overall score, and a recommendation.
    @mcp.tool()
    def audit_ai_bom_completeness(ai_bom_json: str, api_key: str = "") -> str:
        """Audit an existing AI-BOM for completeness against the 10 required field categories.
        Returns per-category pass/fail + gap list.
    
        Behavior:
            This tool is read-only and stateless — it produces analysis output
            without modifying any external systems, databases, or files.
            Safe to call repeatedly with identical inputs (idempotent).
            Free tier: 10/day rate limit. Pro tier: unlimited.
            No authentication required for basic usage.
    
        When to use:
            Use this tool when you need structured analysis or classification
            of inputs against established frameworks or standards.
    
        When NOT to use:
            Not suitable for real-time production decision-making without
            human review of results.
    
        Args:
            ai_bom_json (str): The ai bom json to analyze or process.
            api_key (str): The api key to analyze or process.
    
        Behavioral Transparency:
            - Side Effects: This tool is read-only and produces no side effects. It does not modify
              any external state, databases, or files. All output is computed in-memory and returned
              directly to the caller.
            - Authentication: No authentication required for basic usage. Pro/Enterprise tiers
              require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
            - Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are
              included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
            - Error Handling: Returns structured error objects with 'error' key on failure.
              Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
            - Idempotency: Fully idempotent — calling with the same inputs always produces the
              same output. Safe to retry on timeout or transient failure.
            - Data Privacy: No input data is stored, logged, or transmitted to external services.
              All processing happens locally within the MCP server process.
        """
        allowed, msg, tier = check_access(api_key)
        if not allowed:
            return json.dumps({"error": msg, "upgrade_url": STRIPE_199})
        if err := _rl(tier):
            return json.dumps({"error": err, "upgrade_url": STRIPE_199})
    
        try:
            doc = json.loads(ai_bom_json) if isinstance(ai_bom_json, str) else ai_bom_json
        except Exception as e:
            return json.dumps({"error": f"Invalid JSON: {e}"})
    
        blob = json.dumps(doc).lower()
        results = []
        passed = 0
        for cat, fields in AI_BOM_REQUIRED_FIELDS.items():
            missing = []
            for f in fields:
                if f.lower() not in blob and f.replace("_", "").lower() not in blob and f.replace("_", " ").lower() not in blob:
                    missing.append(f)
            full = len(missing) == 0
            partial = len(missing) < len(fields)
            if full:
                passed += 1
            results.append({
                "category": cat,
                "status": "COMPLETE" if full else "PARTIAL" if partial else "MISSING",
                "missing_fields": missing,
            })
        total = len(AI_BOM_REQUIRED_FIELDS)
        return json.dumps({
            "overall_score_percent": round(passed / total * 100, 1),
            "categories_complete": f"{passed}/{total}",
            "categories_detail": results,
            "recommendation": "Review 'MISSING' and 'PARTIAL' categories. Federal procurement reviewers reject AI-BOMs missing any of the 10 categories." if passed < total else "AI-BOM is complete. Sign with Pro tier for auditor-ready export.",
        }, indent=2)
  • The schema defining the 10 required AI-BOM categories and their expected fields. This is the reference data against which audit_ai_bom_completeness validates.
    AI_BOM_REQUIRED_FIELDS = {
        "model_identity": ["name", "version", "organisation", "licence", "release_date", "model_id_hash"],
        "model_architecture": ["architecture_type", "parameter_count", "context_window", "framework", "training_compute_flops"],
        "training_data": ["dataset_sources", "dataset_sizes", "data_provenance", "filtering_applied", "synthetic_data_percent", "copyright_status"],
        "fine_tuning": ["base_model", "fine_tune_method", "fine_tune_dataset", "fine_tune_steps", "rlhf_applied"],
        "evaluation": ["benchmarks_run", "benchmark_scores", "bias_testing_results", "red_team_findings", "eval_dataset_hash"],
        "dependencies": ["inference_engines", "tokenisers", "safety_filters", "retrieval_systems", "tools_registered"],
        "security_controls": ["prompt_injection_defence", "output_filtering", "pii_scrubbing", "adversarial_robustness_rating"],
        "governance": ["risk_classification", "regulations_applicable", "human_oversight_mechanism", "incident_reporting_contact"],
        "usage_restrictions": ["acceptable_use_policy", "prohibited_use_cases", "export_control_status", "region_restrictions"],
        "distribution": ["distribution_channels", "access_controls", "update_cadence", "decommissioning_policy"],
    }
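To see how the substring-based field matching behaves, here is a standalone sketch of the handler's core loop, run against a trimmed two-category subset of the schema above (the sample document is invented for illustration):

```python
import json

# Trimmed two-category subset of AI_BOM_REQUIRED_FIELDS, for illustration only
REQUIRED = {
    "model_identity": ["name", "version", "licence"],
    "governance": ["risk_classification", "incident_reporting_contact"],
}

def audit(doc: dict) -> dict:
    # Matching is substring-based on the serialized document, so a key like
    # "risk classification" (with a space) still satisfies "risk_classification".
    blob = json.dumps(doc).lower()
    statuses = {}
    for cat, fields in REQUIRED.items():
        missing = [
            f for f in fields
            if f.lower() not in blob
            and f.replace("_", "").lower() not in blob
            and f.replace("_", " ").lower() not in blob
        ]
        if not missing:
            statuses[cat] = "COMPLETE"
        elif len(missing) < len(fields):
            statuses[cat] = "PARTIAL"
        else:
            statuses[cat] = "MISSING"
    return statuses

sample = {"name": "demo-model", "version": "1.0", "licence": "MIT",
          "risk classification": "limited"}
print(audit(sample))
# → {'model_identity': 'COMPLETE', 'governance': 'PARTIAL'}
```

Note that because matching is by substring over the whole serialized document, a field name appearing anywhere (even inside a value) counts as present; this is a lenient check, not structural validation.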
  • server.py:245-246 (registration)
    The tool is registered as an MCP tool via the @mcp.tool() decorator on line 245, binding it to the FastMCP 'ai-bom' server instance.
    @mcp.tool()
    def audit_ai_bom_completeness(ai_bom_json: str, api_key: str = "") -> str:
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full weight and excels: it covers side effects (read-only, no modifications), authentication (none for basic), rate limits (10/day free), error handling (structured errors), idempotency, and data privacy. This is comprehensive and exceeds typical annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections and front-loaded key information. However, there is redundancy between the 'Behavior' and 'Behavioral Transparency' sections, making it slightly longer than necessary. Overall, it is organized and each part serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 params, output schema present), the description covers all essential aspects: input, output (pass/fail + gap list), behavioral traits, rate limits, and limitations. The output schema exists, so not detailing return format is acceptable. It is fully sufficient for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, so the description must compensate. The 'Args' section provides brief descriptions ('The ai bom json to analyze or process') which adds minimal value beyond the title. The overall context helps, but parameter-specific detail is lacking.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool audits an AI-BOM for completeness against 10 required field categories and returns pass/fail and gap list. This specific verb+resource+scope distinguishes it from siblings like generate_ai_bom (creation) and map_to_regulation (mapping).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit 'When to use' and 'When NOT to use' sections, offering context for appropriate usage and cautioning against real-time decisions without human review. However, it does not directly compare to sibling tools, missing a chance to differentiate further.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
