Compliance Posture

compliance
Read-only · Idempotent

Scan MCP configurations and Docker images to evaluate compliance with OWASP LLM Top 10, OWASP MCP Top 10, MITRE ATLAS, and NIST AI RMF, returning per-control pass/warning/fail status and an overall score.

Instructions

Get OWASP LLM Top 10 / OWASP MCP Top 10 / MITRE ATLAS / NIST AI RMF compliance posture.

    Scans local MCP configurations, maps findings to 47 security controls
    across four AI security frameworks, and returns per-control
    pass/warning/fail status with an overall compliance score.

    Args:
        config_path: Path to a specific MCP config directory.
                     If not provided, auto-discovers all local agent configs.
        image: Docker image reference to scan (e.g. "nginx:1.25").

    Returns:
        JSON with overall_score (0-100), overall_status (pass/warning/fail),
        and per-control details for OWASP LLM Top 10 (10 controls),
        OWASP MCP Top 10 (10 controls), MITRE ATLAS (13 techniques),
        and NIST AI RMF (14 subcategories).
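
As a quick illustration, here is a minimal sketch of a tools/call request for this tool over MCP JSON-RPC. The argument value is illustrative; pass config_path instead of image to scan a specific config directory, or omit both to auto-discover local agent configs.

import json

# Hypothetical MCP tools/call payload for the compliance tool.
# "nginx:1.25" is an illustrative image reference, not a default.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compliance",
        "arguments": {"image": "nginx:1.25"},
    },
}
print(json.dumps(request, indent=2))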
    

Input Schema

Name        | Required | Description                                                          | Default
config_path | No       | Path to MCP client config directory. Auto-discovers all if omitted. |
image       | No       | Docker image to scan, e.g. 'nginx:1.25'.                             |

Output Schema

Name   | Required | Description | Default
result | Yes      |             |
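
To make the result field concrete, here is a hedged sketch of parsing the returned JSON: the top-level field names follow the Returns section above, while the frameworks key and its contents are assumptions for illustration.

import json

# Illustrative payload only; overall_score and overall_status come
# from the Returns section, the "frameworks" structure is assumed.
raw = '''{
  "overall_score": 72,
  "overall_status": "warning",
  "frameworks": {
    "owasp_llm_top_10": [{"control": "LLM01", "status": "pass"}]
  }
}'''

report = json.loads(raw)
if report["overall_status"] != "pass":
    print(f"Score {report['overall_score']}/100: review failing controls")
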
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds value by detailing the scanning process (local configs, Docker images) and the output structure with 47 controls across four frameworks. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with a brief intro, an Args section, and a Returns section, and it front-loads the core purpose, but it is slightly verbose (e.g., listing control counts per framework) and could be tightened without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-framework compliance scanning) and the existence of an output schema, the description fully explains the return structure (overall_score, overall_status, per-control details). No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters clearly described, but the tool description merely restates their purposes without adding significant meaning beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves compliance posture across four specific AI security frameworks, with verbs like 'Get', 'Scans', 'maps', 'returns'. It distinguishes itself from siblings like 'scan' or 'ai_inventory_scan' by focusing on multi-framework compliance assessment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for checking compliance against OWASP, MITRE, and NIST frameworks, but it does not explicitly state when to use this tool versus alternatives like 'scan' or 'policy_check'. No exclusion criteria or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/msaad00/agent-bom'
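
The same lookup from Python, as a sketch: it assumes only that the endpoint returns JSON, and prints the raw body rather than guessing at field names. Requires the third-party requests package.

import requests

# Fetch this server's directory entry and dump the JSON response.
url = "https://glama.ai/api/mcp/v1/servers/msaad00/agent-bom"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())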

If you have feedback or need assistance with the MCP directory API, please join our Discord server.