
ComplianceCow MCP Server

attach_rule_to_control

Attach compliance rules to assessment controls to automate evidence generation. Validates control existence and rule publication status before creating the link.

Instructions

Attach a rule to a specific control in an assessment.

🚨 CRITICAL EXECUTION BLOCKERS — DO NOT SKIP 🚨

Before any part of this tool can run, five preconditions MUST be met:

  1. Control Verification:

  • You MUST verify the control exists in the assessment by calling verify_control_in_assessment().

  • Verification must confirm the control is present, valid, and a leaf control.

  • If verification fails → STOP immediately. Do not proceed.

  2. Rule ID Resolution:

  • If rule_id is a valid UUID → proceed.

  • If rule_id is an alphabetic string → treat it as the rule name and resolve it to a UUID using fetch_cc_rule_by_name().

  • If resolution fails or rule_id is still not a UUID after this step → STOP immediately.

  • Execution is STRICTLY PROHIBITED with a plain name.

  3. Rule Publish Validation:

  • You MUST check if the rule is published in ComplianceCow before proceeding.

  • If the rule is not published → STOP immediately.

  • Published status is a hard requirement for attachment.

  4. Evidence Creation Acknowledgment:

  • Before proceeding, you MUST request confirmation from the user about create_evidence.

  • Ask: "Do you want to auto-generate evidence from the rule output? (default: True)"

  • Only proceed after the user explicitly acknowledges their choice.

  5. Override Acknowledgment:

  • If the control already has a rule attached, you MUST request user confirmation before overriding.

  • Ask: "This control already has a rule attached. Do you want to override it? (yes/no)"

  • Only proceed if the user explicitly confirms.
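Blocker 2 (rule ID resolution) is mechanical enough to sketch in Python. In this sketch, fetch_cc_rule_by_name is the server tool named above, passed in as a callable; the helper function names themselves are hypothetical:

```python
import uuid


def is_valid_uuid(value: str) -> bool:
    """Return True if value parses as a UUID (any version)."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False


def resolve_rule_id(rule_id: str, fetch_cc_rule_by_name) -> str:
    """Resolve rule_id to a UUID, or raise so execution STOPs."""
    if is_valid_uuid(rule_id):
        return rule_id
    # Alphabetic string: treat it as the rule name and resolve it.
    resolved = fetch_cc_rule_by_name(rule_id)
    if resolved is None or not is_valid_uuid(resolved):
        raise ValueError(
            f"Could not resolve rule '{rule_id}' to a UUID; stopping.")
    return resolved
```

Execution with a plain name never reaches the attachment step: either the name resolves to a UUID first, or the resolver raises.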

RULE ATTACHMENT WORKFLOW:

  1. Perform control verification using verify_control_in_assessment() (MANDATORY).

  2. Resolve rule_id using the CRITICAL EXECUTION BLOCKERS above (use fetch_cc_rule_by_name() when needed).

  3. Validate that the rule is published in ComplianceCow.

  4. Confirm evidence creation preference from the user (acknowledgment REQUIRED).

  5. Check for existing rule attachments and request override acknowledgment if needed.

  6. Attach rule to control.

  7. Optionally create evidence for the control.
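The seven-step workflow above can be sketched as one orchestration function. The callables bundled in `ops` are hypothetical stand-ins; only verify_control_in_assessment() and fetch_cc_rule_by_name() are real tools named in this document:

```python
def attach_rule_workflow(rule_id, assessment_name, control_id, *, ops):
    """Sketch of the mandated step order using injected helper callables."""
    # Step 1: control verification (mandatory); stop if it fails.
    if not ops["verify_control"](assessment_name, control_id):
        return {"status": "stopped", "reason": "control verification failed"}
    # Step 2: resolve rule_id to a UUID (names go through fetch_cc_rule_by_name).
    rule_uuid = ops["resolve_rule"](rule_id)
    if rule_uuid is None:
        return {"status": "stopped", "reason": "rule_id not resolved to a UUID"}
    # Step 3: the rule must be published in ComplianceCow.
    if not ops["is_published"](rule_uuid):
        return {"status": "stopped", "reason": "rule is not published"}
    # Step 4: evidence-creation acknowledgment from the user.
    want_evidence = ops["confirm"](
        "Do you want to auto-generate evidence from the rule output? (default: True)")
    # Step 5: override acknowledgment if a rule is already attached.
    if ops["has_existing_rule"](control_id) and not ops["confirm"](
            "This control already has a rule attached. Do you want to override it? (yes/no)"):
        return {"status": "stopped", "reason": "override declined"}
    # Steps 6 and 7: attach, then optionally create evidence.
    result = ops["attach"](rule_uuid, assessment_name, control_id)
    if want_evidence:
        ops["create_evidence"](control_id)
    return {"status": "attached", "rule_id": rule_uuid, **result}
```

Every stop condition returns early, so the attachment call in step 6 is unreachable unless all five blockers have passed.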

ATTACHMENT OPTIONS:

  • create_evidence: Whether to create evidence along with rule attachment. Must be confirmed by the user before proceeding.

VALIDATION REQUIREMENTS:

  • Control must be verified and confirmed as a leaf control.

  • Rule must be published.

  • Rule ID must be a valid UUID.

  • Assessment and control must exist.

  • User must acknowledge override before replacing an existing rule.

Args:

  • rule_id: ID of the rule to attach (UUID). If an alphabetic string is provided, it MUST be resolved to a UUID using fetch_cc_rule_by_name() before the tool proceeds.

  • assessment_name: Name of the assessment.

  • control_id: ID of the control.

  • create_evidence: Whether to create auto-generated evidence from the rule output (default: True). ⚠️ MUST be confirmed by user acknowledgment before execution.

Returns: Dict containing attachment status and details.
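For illustration, an MCP client would invoke this tool with a standard tools/call request along these lines (all argument values below are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "attach_rule_to_control",
    "arguments": {
      "rule_id": "123e4567-e89b-12d3-a456-426614174000",
      "assessment_name": "SOC 2 Readiness",
      "control_id": "CTRL-001",
      "create_evidence": true
    }
  }
}
```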

Input Schema

  Name             Required   Description   Default
  rule_id          Yes
  assessment_name  Yes
  control_id       Yes
  create_evidence  No                       True

Output Schema

No output fields documented.

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description fully compensates by disclosing destructive override behavior, mandatory user acknowledgments, side effects (evidence creation), and strict preconditions that prevent execution.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely verbose with heavy formatting (emojis, caps, bold) and repetitive sections (Critical Blockers vs Workflow), though appropriately front-loaded with safety warnings for a high-risk operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of the complex orchestration requirements and multi-step workflow; the brief mention of the returned Dict is acceptable given the output schema, though error conditions could be noted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Compensates for 0% schema coverage by detailing that rule_id accepts alphabetic strings requiring resolution to UUID, and that create_evidence requires explicit user confirmation, though assessment_name and control_id receive minimal semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource ('Attach a rule to a specific control') and clearly distinguishes from sibling tools like verify_control_in_assessment or fetch_cc_rule_by_name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists five 'CRITICAL EXECUTION BLOCKERS' with stop conditions, references required prerequisite tools (verify_control_in_assessment, fetch_cc_rule_by_name), and details when to request user confirmation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

