ComplianceCow MCP Server

execute_action

Trigger actions on assessment runs, control runs, or evidence levels to automate compliance remediation, requiring explicit user confirmation before modifying system state.

Instructions

Use this tool when the user asks to perform an action, such as create or update, or makes other action-related requests.

IMPORTANT: This tool MUST ONLY be executed after explicit user confirmation. Always prompt the user for any REQUIRED-FROM-USER fields and collect their inputs. Always confirm the inputs before executing the action. Always describe the intended action and its effects to the user, then wait for their explicit approval before proceeding. Do not execute this tool without clear user consent, as it performs real operations that modify system state.

Execute or trigger a specific action at one of three levels:

- Assessment run: use assessmentId, assessmentRunId, and actionBindingId.
- Control run: additionally use assessmentRunControlId.
- Evidence level: additionally use assessmentRunControlEvidenceId and evidenceRecordIds.

Use the fetch assessment available actions tool to obtain the actionBindingId. Only one action can be triggered at a time, at the assessment, control, or evidence level, based on user preference. Always state the intended effect when executing an action. For inputs, use the default values as samples and generate the action inputs from them, formatted as key = inputName, value = inputValue. If inputs are provided, always show all of them to the user before executing the action, allow the user to modify them, and confirm any modified inputs before executing.
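The three targeting levels above each require a different argument set. A minimal sketch of the corresponding tool-call payloads, where all ID values and the `inputs` entry are hypothetical placeholders, not values from the server:

```python
# Hypothetical execute_action argument payloads; all IDs are placeholders.

# Assessment-level action: only the three base IDs.
assessment_level = {
    "assessmentId": "asmt-123",
    "assessmentRunId": "run-456",
    "actionBindingId": "bind-789",
}

# Control-level action adds assessmentRunControlId.
control_level = {
    **assessment_level,
    "assessmentRunControlId": "ctrl-001",
}

# Evidence-level action adds the evidence ID and record IDs.
evidence_level = {
    **assessment_level,
    "assessmentRunControlEvidenceId": "evd-002",
    "evidenceRecordIds": ["rec-1", "rec-2"],
    # Optional inputs mapping: key is the inputName, value is the inputValue.
    "inputs": {"ticketPriority": "high"},
}
```

Only one of these payload shapes should be sent per call, matching the single level the user chose.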

WORKFLOW:

  1. First fetch the available actions for the user's preferred level (assessment, control, or evidence)

  2. Present the available actions to the user

  3. Ask user to confirm which specific action they want to execute

  4. Explain what the action will do and its expected effects

  5. Wait for explicit user confirmation before calling this tool

  6. Only then execute the action with this tool
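The confirmation gate in the workflow above can be sketched as a small client-side guard. The helper and its callbacks below are illustrative only and not part of the MCP server; `ask_user` and `execute` stand in for the chat UI and the actual execute_action tool call:

```python
# Illustrative confirmation guard for the workflow above; the MCP server
# itself does not ship these helpers.

def confirm_and_execute(action, inputs, ask_user, execute):
    """Show the action and its inputs, then execute only on explicit consent.

    ask_user(prompt) -> str and execute(action, inputs) -> dict are
    caller-supplied callbacks standing in for the chat UI and the
    execute_action tool call.
    """
    summary = f"About to run '{action}' with inputs {inputs}. Proceed? (yes/no)"
    answer = ask_user(summary).strip().lower()
    if answer != "yes":
        # No tool call happens without explicit consent.
        return {"executed": False, "reason": "user declined"}
    result = execute(action, inputs)
    return {"executed": True, "result": result}
```

For example, `confirm_and_execute("notify-owner", {"priority": "high"}, input, call_tool)` would print the summary, read the user's reply, and only invoke `call_tool` on a literal "yes".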

Args:

- assessmentId
- assessmentRunId
- actionBindingId
- assessmentRunControlId: needed for control-level actions
- assessmentRunControlEvidenceId: needed for evidence-level actions
- evidenceRecordIds: needed for evidence-level actions
- inputs (Optional[dict[str, Any]]): additional inputs for the action, if required by the action's rules
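Because the required arguments differ by targeting level, a client may want to validate an argument set before calling the tool. A sketch under that assumption, with the per-level field sets taken from the Args list above (the helper itself is hypothetical):

```python
# Required fields per targeting level, per the Args list above.
REQUIRED = {
    "assessment": {"assessmentId", "assessmentRunId", "actionBindingId"},
    "control": {"assessmentId", "assessmentRunId", "actionBindingId",
                "assessmentRunControlId"},
    "evidence": {"assessmentId", "assessmentRunId", "actionBindingId",
                 "assessmentRunControlEvidenceId", "evidenceRecordIds"},
}

def missing_fields(level, args):
    """Return the required fields absent from args for the given level."""
    return sorted(REQUIRED[level] - set(args))
```

A call like `missing_fields("control", {"assessmentId": "a"})` reports which IDs still need to be collected from the user before execution.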

Returns:

- id (str): ID of the triggered action

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| assessmentId | Yes | | |
| assessmentRunId | Yes | | |
| actionBindingId | Yes | | |
| assessmentRunControlId | No | | |
| assessmentRunControlEvidenceId | No | | |
| evidenceRecordIds | No | | |
| inputs | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| id | No | | |
| error | No | | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses destructive state modification, requires explicit user confirmation, and notes single-action constraint despite no annotations provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Repetitive phrasing ('Execute or trigger' stated thrice) and scattered constraints; WORKFLOW section helps but overall text could be 30% shorter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers complex multi-level targeting (assessment/control/evidence), references required sibling tool, and acknowledges output return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Comprehensively compensates for 0% schema coverage by documenting when each optional parameter is required (control vs evidence level contexts) and explaining inputs structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly specifies triggering actions on assessment/control/evidence levels, though opening 'create, update' phrasing is slightly misleading before clarifying specific domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 6-step WORKFLOW, mandatory confirmation requirement, and clear directive to use 'fetch_assessment_available_actions' sibling first provide strong usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ComplianceCow/cow-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.