check_annex_controls

Evaluate an AI system against ISO 42001 Annex A controls to determine applicable controls and implementation status, producing a gap analysis for compliance documentation.

Instructions

Evaluate an AI system against ISO 42001 Annex A controls.

Maps the system to all Annex A control objectives and evaluates which controls are applicable and their implementation status. Produces a gap analysis suitable for Statement of Applicability.

Args:
- system_description: Description of the AI system and its management.
- system_name: Name of the AI system.
- implemented_controls: Description of controls already implemented (free text or comma-separated control IDs).
- caller: Caller identifier for rate limiting.
- tier: Pricing tier ('free' or 'pro').

Returns: Annex A control evaluation with applicability and gap analysis.
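To make the Args section concrete, here is a sketch of an arguments object for this tool. The parameter names come from the Args section above; the system description, caller name, and control IDs are hypothetical values for illustration, not examples from the tool's documentation.

```python
# Hypothetical arguments for a check_annex_controls call. Parameter names
# come from the Args section above; the values are illustrative only.
arguments = {
    "system_description": (
        "A customer-support chatbot fine-tuned on internal tickets, "
        "with human review of all escalations."
    ),
    "system_name": "SupportBot",
    # implemented_controls accepts free text or comma-separated control IDs.
    "implemented_controls": "A.2.2, A.3.2, A.4.2",
    "caller": "acme-compliance-agent",
    "tier": "free",
}

# Sketch of normalizing the comma-separated form into individual control IDs.
control_ids = [part.strip() for part in arguments["implemented_controls"].split(",")]
print(control_ids)  # ['A.2.2', 'A.3.2', 'A.4.2']
```

Only system_description is required; the remaining fields fall back to the defaults listed in the input schema below.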

Behavior: This tool generates structured output without modifying external systems. Output is deterministic for identical inputs. No side effects. Free tier: 10/day rate limit. Pro tier: unlimited. No authentication required for basic usage.

When to use: Use this tool when you need to assess, audit, or verify compliance requirements. Ideal for gap analysis, readiness checks, and generating compliance documentation.

When NOT to use: Do not use as a substitute for qualified legal counsel. This tool provides technical compliance guidance, not legal advice.

Behavioral Transparency:
- Side Effects: This tool is read-only and produces no side effects. It does not modify any external state, databases, or files. All output is computed in-memory and returned directly to the caller.
- Authentication: No authentication required for basic usage. Pro/Enterprise tiers require a valid MEOK API key passed via the MEOK_API_KEY environment variable.
- Rate Limits: Free tier: 10 calls/day. Pro tier: unlimited. Rate limit headers are included in responses (X-RateLimit-Remaining, X-RateLimit-Reset).
- Error Handling: Returns structured error objects with an 'error' key on failure. Never raises unhandled exceptions. Invalid inputs return descriptive validation errors.
- Idempotency: Fully idempotent: calling with the same inputs always produces the same output. Safe to retry on timeout or transient failure.
- Data Privacy: No input data is stored, logged, or transmitted to external services. All processing happens locally within the MCP server process.
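Because the tool is idempotent and reports failures as structured error objects rather than raised exceptions, caller-side retry is straightforward. The sketch below assumes a hypothetical `call_tool` callable and the 'error'-key convention described above; neither is a documented API of this server.

```python
import time

def call_with_retry(call_tool, arguments, attempts=3, delay=1.0):
    """Retry a read-only, idempotent tool call on transient failure.

    call_tool is a hypothetical callable that invokes check_annex_controls
    and returns a dict; per the description above, failures come back as a
    dict with an 'error' key rather than as raised exceptions.
    """
    last = None
    for attempt in range(attempts):
        last = call_tool(arguments)
        if "error" not in last:
            return last  # success: the Annex A evaluation result
        time.sleep(delay * (2 ** attempt))  # exponential backoff before retrying
    return last  # give up and surface the structured error to the agent
```

Retrying like this is safe precisely because the tool promises determinism for identical inputs and no side effects; the same pattern would be unsafe for a mutating tool.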

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| system_description | Yes | | |
| system_name | No | | AI System |
| implemented_controls | No | | |
| caller | No | | anonymous |
| tier | No | | free |
| api_key | No | | |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description includes a comprehensive 'Behavioral Transparency' section covering side effects (read-only, no side effects), authentication, rate limits, error handling, idempotency, and data privacy. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections and front-loaded purpose. However, there is slight redundancy between the 'Behavior:' and 'Behavioral Transparency:' sections, making it a bit longer than necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, no output schema, and no annotations, the description is very complete, covering behavior, usage, parameters, and error handling. It lacks a detailed description of the return format, but that is acceptable given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description's 'Args:' section explains each parameter in detail (e.g., 'implemented_controls: free text or comma-separated control IDs'), fully compensating for the missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: evaluate AI system against ISO 42001 Annex A controls, mapping controls, assessing applicability, and producing a gap analysis. It distinguishes itself from siblings like 'assess_ai_risk' by focusing on a specific standard.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'When to use' and 'When NOT to use' sections provide clear guidance: use for compliance assessment, gap analysis, and documentation; not a substitute for legal advice. This helps the agent decide when to invoke.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/CSOAI-ORG/iso-42001-ai-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.