Glama

ComplianceCow MCP Server

Server Configuration

Describes the environment variables required to run the server.

  • CCOW_HOST (required; default: https://partner.compliancecow.live): The hostname of the ComplianceCow instance (e.g., 'https://partner.compliancecow.live').

  • CCOW_CLIENT_ID (required): Your OAuth 2.0 client ID. Obtain this by clicking 'Manage Client Credentials' in the top-right user profile menu of your ComplianceCow instance.

  • CCOW_CLIENT_SECRET (required): Your OAuth 2.0 client secret. Obtain this by clicking 'Manage Client Credentials' in the top-right user profile menu of your ComplianceCow instance.
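
The variables above can be wired into an MCP client launch configuration; here is a minimal sketch in Python, assuming a client that accepts an "env" map per server (the "mcpServers"/"compliancecow" structure is illustrative, not a documented schema).

```python
import json
import os

# The three variables from the table above; CCOW_HOST falls back to its
# documented default when unset. The placeholder credentials are obviously
# not real values.
config = {
    "CCOW_HOST": os.environ.get("CCOW_HOST", "https://partner.compliancecow.live"),
    "CCOW_CLIENT_ID": os.environ.get("CCOW_CLIENT_ID", "<your-client-id>"),
    "CCOW_CLIENT_SECRET": os.environ.get("CCOW_CLIENT_SECRET", "<your-client-secret>"),
}

# Many MCP clients accept server definitions as JSON with an "env" map;
# the surrounding keys here are an assumed, illustrative shape.
server_entry = {"mcpServers": {"compliancecow": {"env": config}}}
print(json.dumps(server_entry, indent=2))
```

Storing the client secret in an environment variable (rather than inline JSON) keeps it out of version-controlled config files.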

Capabilities

Features and capabilities supported by this server

  • tools: { "listChanged": true }

  • prompts: { "listChanged": false }

  • resources: { "subscribe": false, "listChanged": false }

  • experimental: {}

Tools

Functions exposed to the LLM to take actions

NameDescription
read_file

Read content from a local file given a file:// URI or file path.

Args: - uri: File URI (file://) or local file path to read. - max_chars: Maximum characters to return (default: 8000, roughly 2000 tokens).

Returns: Dictionary containing file content or error message
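
The documented behavior (file:// handling plus the max_chars cut) can be sketched locally; this is an illustration of the contract above, not the server's actual implementation.

```python
from urllib.parse import urlparse, unquote

def read_file(uri: str, max_chars: int = 8000) -> dict:
    """Read a local file from a file:// URI or plain path, truncated to max_chars."""
    # Strip the file:// scheme if present; otherwise treat the input as a path.
    path = unquote(urlparse(uri).path) if uri.startswith("file://") else uri
    try:
        with open(path, "r", encoding="utf-8", errors="replace") as f:
            content = f.read(max_chars)
        # Heuristic: a read that exactly fills max_chars was probably truncated.
        return {"content": content, "truncated": len(content) == max_chars}
    except OSError as exc:
        return {"error": str(exc)}
```

The error path mirrors the documented "content or error message" dictionary shape.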

read_resource

Read content from a resource URI (primarily for local files).

Args: - uri: Resource URI to read. - max_chars: Maximum characters to return (default: 8000, roughly 2000 tokens).

Returns: Dictionary containing resource content or error message

list_all_assessment_categories

Get all assessment categories

Returns: - categories (List[Category]): A list of category objects, where each category includes: - id (str): Unique identifier of the assessment category. - name (str): Name of the category. - error (Optional[str]): An error message if any issues occurred during retrieval.

list_assessments

Get all assessments.

Args: - categoryId: assessment category id (Optional) - categoryName: assessment category name (Optional) - assessmentName: assessment name (Optional)

Returns: - assessments (List[Assessments]): A list of assessment objects, where each assessment includes: - id (str): Unique identifier of the assessment. - name (str): Name of the assessment. - category_name (str): Name of the category. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_unique_node_data_and_schema

Fetch unique node data and schema

execute_cypher_query

Given a question and query, execute a cypher query and transform result to human readable format.

This tool queries a Neo4j graph database containing compliance controls, frameworks, and evidence.

Key database structure: - Controls have hierarchical relationships via HAS_CHILD edges - Evidence nodes are attached to leaf controls (controls with no children) - Use recursive patterns [HAS_CHILD*] for traversing control hierarchies - Controls may have multiple levels of nesting - Evidence contains records - RiskItem nodes are attached to control-config via HAS_RISK & HAS_MAPPED_CONTROL edges - RiskItemAttribute nodes are attached to RiskItem via HAS_ATTRIBUTE edges - RiskItem contains RiskItemAttributes

Query guidelines: - For control hierarchies: Use MATCH (parent)-[HAS_CHILD*]->(child) patterns - For evidence: Evidence is only available on leaf controls (controls with no outgoing HAS_CHILD relationships); always check the last child of a control for evidence - For control depth: Calculate hierarchy depth when analyzing control structures - Use APOC procedures for complex graph operations when available - When listing assessment runs, always include the assessment name - For large query results: Provide an overview summary and suggest refinements

Args: query (str): The Cypher query to execute against the graph database.

Returns: - result (Any): The formatted, human-readable result of the Cypher query. - error (Optional[str]): An error message if the query execution fails or encounters issues.

Example queries:

  • Find all root controls: MATCH (c:Control) WHERE NOT ()-[:HAS_CHILD]->(c) RETURN c

  • Get control hierarchy: MATCH (root)-[:HAS_CHILD*]->(leaf) RETURN root, leaf

  • Find evidence for leaf controls: MATCH (c:Control)-[:HAS_EVIDENCE]->(e:Evidence) RETURN c, e

  • Find leaf controls: MATCH (c:Control) WHERE NOT (c)-[:HAS_CHILD]->(:Control) RETURN c

  • Find records: MATCH (e:Evidence)-[:HAS_RECORD]-(:Record) RETURN e
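
Composing one of these queries in Python before handing it to execute_cypher_query might look like the sketch below; the labels and relationships (Control, Evidence, HAS_CHILD, HAS_EVIDENCE) come from the description above, and the single query argument matches the documented Args.

```python
# Find evidence attached to leaf controls (no outgoing HAS_CHILD), per the
# guideline that evidence only exists on leaf controls.
leaf_evidence_query = (
    "MATCH (c:Control)-[:HAS_EVIDENCE]->(e:Evidence) "
    "WHERE NOT (c)-[:HAS_CHILD]->(:Control) "
    "RETURN c.name AS control, e.name AS evidence"
)

# execute_cypher_query takes a single argument named `query` (see Args above);
# the exact property names (c.name, e.name) are assumptions for illustration.
call_args = {"query": leaf_evidence_query}
print(call_args["query"])
```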

fetch_recent_assessment_runs

Get recent assessment run for given assessment id

Args: - id (str): assessment id

Returns: - assessmentRuns (List[AssessmentRuns]): A list of assessment runs. - id (str): Assessment run id. - name (str): Name of the assessment run. - description (str): Description of the assessment run. - assessmentId (str): Assessment id. - applicationType (str): Application type. - configId (str): Configuration id. - fromDate (str): From date of the assessment run. - toDate (str): To date of the assessment run. - status (str): Status of the assessment run. - computedScore (str): Computed score. - computedWeight (str): Computed weight. - complianceStatus (str): Compliance status. - compliancePCT (str): Compliance percentage. - complianceWeight (str): Compliance weight. - createdAt (str): Time and date when the assessment run was created. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_assessment_runs

Get all assessment runs for a given assessment id. The function accepts a page number (page) and page size (pageSize) for pagination. If the MCP client host cannot handle a large response, use page and pageSize; the default page is 1. If the request times out, retry with pagination, increasing pageSize from 5 to 10. Use this tool when the expected run is not returned by the fetch_recent_assessment_runs tool.

Args: - id (str): Assessment id

Returns: - assessmentRuns (List[AssessmentRuns]): A list of assessment runs. - id (str): Assessment run id. - name (str): Name of the assessment run. - description (str): Description of the assessment run. - assessmentId (str): Assessment id. - applicationType (str): Application type. - configId (str): Configuration id. - fromDate (str): From date of the assessment run. - toDate (str): To date of the assessment run. - status (str): Status of the assessment run. - computedScore (str): Computed score. - computedWeight (str): Computed weight. - complianceStatus (str): Compliance status. - compliancePCT (str): Compliance percentage. - complianceWeight (str): Compliance weight. - createdAt (str): Time and date when the assessment run was created. - error (Optional[str]): An error message if any issues occurred during retrieval.
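
The retry-with-pagination advice above can be sketched as a small wrapper; call_tool is a hypothetical stand-in for however your MCP client invokes tools, and TimeoutError as the failure mode is an assumption.

```python
def fetch_runs_paged(call_tool, assessment_id: str, page_sizes=(5, 10)):
    """Try fetch_assessment_runs with increasing pageSize, as suggested above
    when the first attempt times out. call_tool is a hypothetical client hook."""
    last_error = None
    for size in page_sizes:
        try:
            runs, page = [], 1
            while True:
                resp = call_tool("fetch_assessment_runs",
                                 {"id": assessment_id, "page": page, "pageSize": size})
                batch = resp.get("assessmentRuns", [])
                runs.extend(batch)
                if len(batch) < size:  # short page: no more results
                    return runs
                page += 1
        except TimeoutError as exc:  # assumed failure mode; retry with next size
            last_error = exc
    raise last_error
```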

fetch_assessment_run_details

Get assessment run details for a given assessment run id. This API returns many controls; use page to get details page by page. If the output is large, store it in a file.

Args: - id (str): Assessment run id

Returns: - controls (List[Control]): A list of controls. - id (str): Control run id. - name (str): Control name. - controlNumber (str): Control number. - alias (str): Control alias. - priority (str): Priority. - stage (str): Control stage. - status (str): Control status. - type (str): Control type. - executionStatus (str): Rule execution status. - dueDate (str): Due date. - assignedTo (List[str]): Assigned user ids - assignedBy (str): Assigner's user id. - assignedDate (str): Assigned date. - checkedOut (bool): Control checked-out status. - compliancePCT__ (str): Compliance percentage. - complianceWeight__ (str): Compliance weight. - complianceStatus (str): Compliance status. - createdAt (str): Time and date when the control run was created. - updatedAt (str): Time and date when the control run was updated. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_assessment_run_leaf_controls

Get leaf controls for given assessment run id. If output is large store it in a file.

Args: - id (str): Assessment run id

Returns: - controls (List[Control]): A list of controls. - id (str): Control run id. - name (str): Control name. - controlNumber (str): Control number. - alias (str): Control alias. - priority (str): Priority. - stage (str): Control stage. - status (str): Control status. - type (str): Control type. - executionStatus (str): Rule execution status. - dueDate (str): Due date. - assignedTo (List[str]): Assigned user ids - assignedBy (str): Assigner's user id. - assignedDate (str): Assigned date. - checkedOut (bool): Control checked-out status. - compliancePCT__ (str): Compliance percentage. - complianceWeight__ (str): Compliance weight. - complianceStatus (str): Compliance status. - createdAt (str): Time and date when the control run was created. - updatedAt (str): Time and date when the control run was updated. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_run_controls

Use this tool when there is no result from the "execute_cypher_query" tool. It returns all controls that match the given name. Next, use the fetch_run_control_meta_data tool if you need the assessment name, assessment id, assessment run name, or assessment run id.

Args: - name (str): Control name

Returns: - controls (List[Control]): A list of controls. - id (str): Control run id. - name (str): Control name. - controlNumber (str): Control number. - alias (str): Control alias. - priority (str): Priority. - stage (str): Control stage. - status (str): Control status. - type (str): Control type. - executionStatus (str): Rule execution status. - dueDate (str): Due date. - assignedTo (List[str]): Assigned user ids - assignedBy (str): Assigner's user id. - assignedDate (str): Assigned date. - checkedOut (bool): Control checked-out status. - compliancePCT__ (str): Compliance percentage. - complianceWeight__ (str): Compliance weight. - complianceStatus (str): Compliance status. - createdAt (str): Time and date when the control run was created. - updatedAt (str): Time and date when the control run was updated. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_run_control_meta_data

Use this tool to retrieve control metadata for a given control_id, including:

  • Control details: control name

  • Assessment details: assessment name and ID

  • Assessment run details: assessment run name and ID

Args: - id (str): Control id

Returns: - assessmentId (str): Assessment id. - assessmentName (str): Assessment name. - assessmentRunId (str): Assessment run id. - assessmentRunName (str): Assessment run name. - controlId (str): Control id. - controlName (str): Control name. - controlNumber (str): Control number. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_assessment_run_leaf_control_evidence

Get leaf control evidence for given assessment run control id.

Args:

  • id (str): Assessment run control id

Returns: - evidences (List[ControlEvidenceVO]): List of control evidences - id (str): Evidence id. - name (str): Evidence name. - description (str): Evidence description. - fileName (str): File name. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_controls

Fetch controls by name.

Args: - control_name (str): Name of the control.

Returns: - prompt (str): The input prompt used to generate the Cypher query for fetching the control.

fetch_evidence_records

Get evidence records for a given evidence ID with optional compliance status filtering. Returns max 50 records but counts all records for the summary.

Args: - id (str): Evidence ID - compliantStatus Optional[(str)]: Compliance status to filter "COMPLIANT", "NON_COMPLIANT", "NOT_DETERMINED" (optional).

Returns: - totalRecords (int): Total records. - compliantRecords (int): Number of compliant records. - nonCompliantRecords (int): Number of non-compliant records. - notDeterminedRecords (int): Number of not-determined records. - records (List[RecordListVO]): List of evidence records. - id (str): Record id. - name (str): System name. - source (str): Record source. - resourceId (str): Resource id. - resourceName (str): Resource name. - resourceType (str): Resource type. - complianceStatus (str): Compliance status. - complianceReason (str): Compliance reason. - createdAt (str): The date and time the record was initially created.
- otherInfo (Any): Additional information.
- error (Optional[str]): An error message if any issues occurred during retrieval.
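
The summary counts above can also be recomputed client-side from the records list; a sketch assuming the complianceStatus values quoted in Args.

```python
from collections import Counter

def summarize_records(records: list) -> dict:
    """Tally evidence records by complianceStatus, mirroring the documented
    summary fields (compliantRecords, nonCompliantRecords, notDeterminedRecords).
    Records with no status are counted as NOT_DETERMINED (an assumption)."""
    counts = Counter(r.get("complianceStatus", "NOT_DETERMINED") for r in records)
    return {
        "totalRecords": len(records),
        "compliantRecords": counts["COMPLIANT"],
        "nonCompliantRecords": counts["NON_COMPLIANT"],
        "notDeterminedRecords": counts["NOT_DETERMINED"],
    }
```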

fetch_evidence_record_schema

Get evidence record schema for a given evidence ID. Returns the schema of evidence record.

Args: - id (str): Evidence ID

Returns: - records (List[RecordListVO]): List of evidence record schema. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_available_control_actions

Use this tool to handle control-related actions such as create and update, or to retrieve the available actions for a given control.

If no control details are given use the tool "fetch_controls" to get the control details.

  1. Fetch the available actions.

  2. Prompt the user to confirm the intended action.

  3. Once confirmed, use the execute_action tool with the appropriate parameters to carry out the operation.

Args:

  • assessmentName (str): Name of the assessment (required)

  • controlNumber (str): Identifier for the control (required)

  • controlAlias (str): Alias of the control (required)

If the above arguments are not available:

  • Use the fetch_controls tool to retrieve control details.

  • Then generate and execute a query to fetch the related assessment information before proceeding.

Returns: - actions (List[ActionsVO]): List of actions - actionName (str): Action name. - actionDescription (str): Action description. - actionSpecID (str): Action specific id. - actionBindingID (str): Action binding id. - target (str): Target. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_assessment_available_actions

Get actions available on an assessment for a given assessment name. Once fetched, ask the user to confirm executing the action, then use the 'execute_action' tool with appropriate parameters to execute it. Args:

  • name (str): Assessment name

Returns: - actions (List[ActionsVO]): List of actions - actionName (str): Action name. - actionDescription (str): Action description. - actionSpecID (str): Action specific id. - actionBindingID (str): Action binding id. - target (str): Target. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_evidence_available_actions

Get actions available on evidence for given evidence name. If the required parameters are not provided, use the existing tools to retrieve them. Once fetched, ask user to confirm to execute the action, then use 'execute_action' tool with appropriate parameters to execute the action. Args: - assessment_name (str): assessment name (required) - control_number (str): control number (required) - control_alias (str): control alias (required)
- evidence_name (str): evidence name (required)

Returns: - actions (List[ActionsVO]): List of actions - actionName (str): Action name. - actionDescription (str): Action description. - actionSpecID (str): Action specific id. - actionBindingID (str): Action binding id. - target (str): Target. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_general_available_actions

Get general actions available on assessments, controls & evidence. Once fetched, ask the user to confirm executing the action, then use the 'execute_action' tool with appropriate parameters to execute it. For inputs, treat the default values as samples and generate the action inputs from them. Args: - type (str): Type of the action, can be "assessment", "control" or "evidence".

Returns: - actions (List[ActionsVO]): List of actions - actionName (str): Action name. - actionDescription (str): Action description. - actionSpecID (str): Action specific id. - actionBindingID (str): Action binding id. - target (str): Target. - ruleInputs: Optional[dict[str, Any]]: Rule inputs for the action, if applicable. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_automated_controls_of_an_assessment

Fetch only the automated controls for a given assessment. If assessment_id is not provided, use the other tools to get the assessment and its id.

Args: - assessment_id (str, required): Assessment id or plan id.

Returns: - controls (List[AutomatedControlVO]): List of controls - id (str): Control ID. - displayable (str): Displayable name or label. - alias (str): Alias of the control. - activationStatus (str): Activation status. - ruleName (str): Associated rule name. - assessmentId (str): Assessment identifier. - error (Optional[str]): An error message if any issues occurred during retrieval.

execute_action

Use this tool when the user asks about actions such as create, update or other action-related queries.

IMPORTANT: This tool MUST ONLY be executed after explicit user confirmation. Always prompt the user for each REQUIRED-FROM-USER field and collect their inputs. Always confirm the inputs before executing the action. Always describe the intended action and its effects to the user, then wait for their explicit approval before proceeding. Do not execute this tool without clear user consent, as it performs actual operations that modify system state.

Execute or trigger a specific action at the assessment, control, or evidence level. For an assessment run, use the assessment id, assessment run id, and action binding id. For a control run, also include the assessment run control id. For the evidence level, also include the assessment run control evidence id and evidence record ids. Use fetch_assessment_available_actions to get the action binding id. Only one action can be triggered at a time, at the assessment, control, or evidence level based on user preference. Always state the intended effect when executing an action. For inputs, treat the default values as samples and generate the action inputs from them, formatted with key as inputName and value as inputValue. If inputs are provided, always show all of them to the user before executing the action, allow the user to change them, and confirm the modified inputs before execution.

WORKFLOW:

  1. First fetch the available actions based on user preference assessment level or control level or evidence level

  2. Present the available actions to the user

  3. Ask user to confirm which specific action they want to execute

  4. Explain what the action will do and its expected effects

  5. Wait for explicit user confirmation before calling this tool

  6. Only then execute the action with this tool

Args: - assessmentId - assessmentRunId - actionBindingId - assessmentRunControlId - needed for control level action - assessmentRunControlEvidenceId - needed for evidence level action - evidenceRecordIds - needed for evidence level action - inputs (Optional[dict[str, Any]]): Additional inputs for the action, if required by the action's rules.

Returns: - id (str): id of triggered action.
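
The confirmation workflow above can be sketched end to end; call_tool and confirm are hypothetical client-side hooks, and only the tool and field names come from this page.

```python
def run_action_with_confirmation(call_tool, assessment_id: str,
                                 assessment_run_id: str, assessment_name: str,
                                 confirm=input):
    """Steps 1-6 of the documented workflow for an assessment-level action."""
    # Steps 1-2: fetch and present the available assessment-level actions.
    actions = call_tool("fetch_assessment_available_actions",
                        {"name": assessment_name}).get("actions", [])
    for i, a in enumerate(actions):
        print(f"[{i}] {a['actionName']}: {a['actionDescription']}")
    # Steps 3-5: explicit confirmation before anything executes.
    choice = confirm("Action number to execute (blank cancels): ").strip()
    if not choice:
        return None  # never execute without clear user consent
    # Step 6: execute with the binding id from the chosen action.
    return call_tool("execute_action", {
        "assessmentId": assessment_id,
        "assessmentRunId": assessment_run_id,
        "actionBindingId": actions[int(choice)]["actionBindingID"],
    })
```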

list_assets

Retrieve all available assets (integration plans).

Returns: - success (bool): Indicates if the operation completed successfully. - assets (List[dict]): A list of assets. - id (str): Asset id. - name (str): Name of the asset. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_assets_summary

Get assets summary for given assessment id

Args: - id (str): Assessment id

Returns: - integrationRunId (str): Integration (asset) run id. - assessmentName (str): Assessment name. - status (str): Run status. - numberOfResources (str): Number of resources. - numberOfChecks (str): Number of checks. - dataStatus (str): Data status. - createdAt (str): Date and time the run was created. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_resource_types

Get resource types for a given asset run id. Use the 'fetch_assets_summary' tool to get the asset run id. The function accepts a page number (page) and page size (pageSize) for pagination. If the MCP client host cannot handle a large response, use page and pageSize. If the request times out, retry with pagination, increasing pageSize from 50 to 100.

  1. Call fetch_resource_types with page=1, pageSize=50

  2. Note the totalPages from the response

  3. Continue calling each page until complete

  4. Summarize all results together

Args: - id (str): Asset run id

Returns: - resourceTypes (List[AssetsVo]): A list of resource types. - resourceType (str): Resource type. - totalResources (int): Total number of resources.
- error (Optional[str]): An error message if any issues occurred during retrieval.
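
The four-step recipe above can be written as a loop over totalPages; call_tool is again a hypothetical client hook, and the totalPages field name comes from step 2.

```python
def fetch_all_resource_types(call_tool, asset_run_id: str, page_size: int = 50):
    """Walk every page of fetch_resource_types and merge the results."""
    merged, page, total_pages = [], 1, 1
    while page <= total_pages:
        resp = call_tool("fetch_resource_types",
                         {"id": asset_run_id, "page": page, "pageSize": page_size})
        merged.extend(resp.get("resourceTypes", []))
        total_pages = resp.get("totalPages", 1)  # step 2: note totalPages
        page += 1
    return merged  # step 4: caller summarizes all results together
```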

fetch_checks

Get checks for a given asset run id and resource type. Use this function to get all checks for the given asset run id and resource type. Use the 'fetch_assets_summary' tool to get the asset run id and the 'fetch_resource_types' tool to get all resource types. The function accepts a page number (page) and page size (pageSize) for pagination. If the MCP client host cannot handle a large response, use page and pageSize. If the request times out, retry with pagination, increasing pageSize from 5 to 10.

If the check data set is too large to fetch efficiently or results in timeouts, use the 'fetch_checks_summary' tool instead to get a summarized view of the checks.

  1. Call fetch_checks with page=1, pageSize=10

  2. Note the totalPages from the response

  3. Continue calling each page until complete

  4. Summarize all results together

Args: - id (str): Asset run id - resourceType (str): Resource type - complianceStatus (str): Compliance status

Returns: - checks (List[CheckVO]): A list of checks. - name (str): Name of the check. - description (str): Description of the check. - rule (RuleVO): Rule associated with the check. - type (str): Type of the rule. - name (str): Name of the rule. - activationStatus (str): Activation status of the check. - priority (str): Priority level of the check. - complianceStatus (str): Compliance status of the check. - compliancePCT (float): Compliance percentage. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_resources

Get resources for a given asset run id and resource type. The function accepts a page number (page) and page size (pageSize) for pagination. If the MCP client host cannot handle a large response, use page and pageSize; the default page is 1. If the request times out, retry with pagination, increasing pageSize from 5 to 10.

If the resource data set is too large to fetch efficiently or results in timeouts, use the 'fetch_resources_summary' tool instead to get a summarized view of the resources.

  1. Call fetch_resources with page=1, pageSize=10

  2. Note the totalPages from the response

  3. Continue calling each page until complete

  4. Summarize all results together

Args: - id (str): Asset run id - resourceType (str): Resource type - complianceStatus (str): Compliance status

Returns: - resources (List[ResourceVO]): A list of resources. - name (str): Name of the resource. - resourceType (str): Type of the resource. - complianceStatus (str): Compliance status of the resource. - checks (List[ResourceCheckVO]): List of checks associated with the resource. - name (str): Name of the check. - description (str): Description of the check. - rule (RuleVO): Rule applied in the check. - type (str): Type of the rule. - name (str): Name of the rule. - activationStatus (str): Activation status of the check. - priority (str): Priority level of the check. - controlName (str): Name of the control. - complianceStatus (str): Compliance status specific to the resource. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_resources_by_check_name

Get resources for a given asset run id and check name. The function accepts a page number (page) and page size (pageSize) for pagination. If the MCP client host cannot handle a large response, use page and pageSize. If the request times out, retry with pagination, increasing pageSize from 10 to 50.

If the resource data set is too large to fetch efficiently or results in timeouts, use the 'fetch_resources_by_check_name_summary' tool instead to get a summarized view of the resources.

  1. Call fetch_resources_by_check_name with page=1, pageSize=10

  2. Note the totalPages from the response

  3. Continue calling each page until complete

  4. Summarize all results together

Args: - id: Asset run id. - checkName: Check name.

Returns: - resources (List[ResourceVO]): A list of resources. - name (str): Name of the resource. - resourceType (str): Type of the resource. - complianceStatus (str): Compliance status of the resource. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_checks_summary

Use this to get a summary of checks, especially when the total item count from 'fetch_checks' is high. Gets a checks summary for a given asset run id and resource type: a compliance breakdown covering total checks available, total compliant checks, and total non-compliant checks.

Args: - id (str): Asset run id - resourceType (str): Resource type

Returns: - complianceSummary (dict): Summary of compliance status across checks. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_resources_summary

Use this to get a summary of resources, especially when the total item count from 'fetch_resources' is high. Fetches a summary of resources for a given asset run id and resource type. The summarized view includes:
    - Compliance breakdown for resources
        - Total resources available
        - Total compliant resources
        - Total non-compliant resources

Args: - id (str): asset run ID - resourceType (str): Resource type

Returns: - complianceSummary (dict): Summary of compliance status across resources. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_resources_by_check_name_summary

Use this to get a summary of check resources, especially when the total item count from 'fetch_resources_by_check_name' is high. Gets a check-resources summary for a given asset run id, resource type, and check; paginated data is sufficient for the summary. The summarized view covers a compliance breakdown for resources: total resources available, total compliant resources, and total non-compliant resources.

Args: - id (str): Asset run id - resourceType (str): Resource type

Returns: - complianceSummary (dict): Summary of compliance status across resources. - error (Optional[str]): An error message if any issues occurred during retrieval.

get_dashboard_review_periods

Fetch the list of review periods.

Returns: - items (List[str]): List of review periods. - error (Optional[str]): An error message if any issues occurred during retrieval.

get_dashboard_data

The function accepts a compliance period as 'period'. Period denotes the quarter of the year for which dashboard data is needed. Format: Q1 2024.

The dashboard contains summary data for the Common Control Framework (CCF). Use this function for anything related to control category, framework, or assignment status. It contains details of control statuses such as 'Completed', 'In Progress', 'Overdue', and 'Pending'. The summarization levels are 'overall control status' (fetched from 'controlStatus'), 'control category wise' (fetched from 'controlSummary'), and 'control framework wise' (fetched from 'frameworks').

Args: - period (str) - Period denotes for which quarter of year dashboard data is needed. Format: Q1 2024.

Returns: - totalControls (int): Total number of controls in the dashboard. - controlStatus (List[ComplianceStatusSummaryVO]): Summary of control statuses. - status (str): Compliance status of the control. - count (int): Number of controls with the given status. - controlAssignmentStatus (List[ControlAssignmentStatusVO]): Assignment status categorized by control. - categoryName (str): Name of the control category. - controlStatus (List[ComplianceStatusSummaryVO]): Status summary within the category. - status (str): Compliance status. - count (int): Number of controls with this status. - compliancePCT (float): Overall compliance percentage across all controls. - controlSummary (List[ControlSummaryVO]): Detailed summary of each control. - category (str): Category name of the control. - status (str): Compliance status of the control. - dueDate (str): Due date for the control, if applicable. - compliancePCT (float): Compliance percentage for the control. - leafControls (int): Number of leaf-level controls in the category. - complianceStatusSummary (List[ComplianceStatusSummaryVO]): Summary of control statuses. - status (str): Compliance status. - count (int): Number of controls with the given status. - error (Optional[str]): An error message if any issues occurred during retrieval.
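
Pulling the 'overall control status' level out of a response might look like the sketch below; the sample payload is invented, shaped only by the field names listed above.

```python
def overall_status_counts(dashboard: dict) -> dict:
    """Map each status in controlStatus to its count (the 'overall control
    status' summarization level described above)."""
    return {s["status"]: s["count"] for s in dashboard.get("controlStatus", [])}

# Invented sample shaped like the documented Returns fields.
sample = {
    "totalControls": 12,
    "controlStatus": [
        {"status": "COMPLIANT", "count": 9},
        {"status": "NON_COMPLIANT", "count": 3},
    ],
    "compliancePCT": 75.0,
}
```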

fetch_dashboard_framework_controls

Function Overview: Retrieve Control Details for a Given CCF and Review Period

This function retrieves detailed control-level data for a specified Common Control Framework (CCF) during a specific review period.

Args:

  • review_period: The compliance period (typically a quarter) for which the control-level data is requested.
    Format: "Q1 2024"

  • framework_name:
    The name of the Common Control Framework to fetch data for.

Purpose

This function is used to fetch a list of controls and their associated data for a specific CCF and review period.
It does not return an aggregated overview; instead, it retrieves detailed, item-level data for each control via an API call.

The results are displayed in the MCP host with client-side pagination, allowing users to navigate through the control list efficiently without making repeated API calls.

Returns: - controls (List[FramworkControlVO]): A list of framework controls. - name (str): Name of the control. - assignedTo (str): Email ID of the user the control is assigned to. - assignmentStatus (str): Status of the control assignment. - complianceStatus (str): Compliance status of the control. - dueDate (str): Due date for completing the control. - score (float): Score assigned to the control. - priority (str): Priority level of the control. - page (int): Current page number in the overall result set. - totalPage (int): Total number of pages. - totalItems (int): Total number of items. - error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_dashboard_framework_summary

Function Overview: CCF Dashboard Summary Retrieval

This function returns a summary dashboard for a specified compliance period and Common Control Framework (CCF). It is designed to provide a high-level view of control statuses within a given framework and period, making it useful for compliance tracking, reporting, and audits.

Args:

  • period:
    The compliance quarter for which the dashboard data is requested.
    Format: "Q1 2024"

  • framework_name:
    The name of the Common Control Framework whose data is to be retrieved.

Dashboard Overview

The dashboard provides a consolidated view of all controls under the specified framework and period. It includes key information such as assignment status, compliance progress, due dates, and risk scoring to help stakeholders monitor and manage compliance posture.

Returns: - controls (List[FramworkControlVO]): A list of framework controls. - name (str): Name of the control. - assignedTo (str): Email ID of the user the control is assigned to. - assignmentStatus (str): Status of the control assignment. - complianceStatus (str): Compliance status of the control. - dueDate (str): Due date for completing the control. - score (float): Score assigned to the control. - priority (str): Priority level of the control. - page (int): Current page number in the overall result set. - totalPage (int): Total number of pages. - totalItems (int): Total number of items. - error (Optional[str]): An error message if any issues occurred during retrieval.

get_dashboard_common_controls_detailsA

Use this tool to get Common Control Framework (CCF) dashboard data for a specific compliance period, with optional filters. The 'period' argument denotes the quarter of the year for which dashboard data is needed (format: 'Q1 2024'). The function provides detailed information about common controls, including their compliance status, control status, and priority. If there are more than 50 controls, use page and pageSize to fetch control data page by page; once the first page is fetched and more pages are available, increase the page number to get the next page.

Args:
- period (str): Compliance period for which dashboard data is needed. Format: 'Q1 2024'. (Required)
- complianceStatus (str): Compliance status filter (Optional, possible values: 'COMPLIANT', 'NON_COMPLIANT', 'NOT_DETERMINED'). Default is empty string (fetch all compliance statuses).
- controlStatus (str): Control status filter (Optional, possible values: 'Pending', 'InProgress', 'Completed', 'Unassigned', 'Overdue'). Default is empty string (fetch all statuses).
- priority (str): Priority of the controls (Optional, possible values: 'High', 'Medium', 'Low'). Default is empty string (fetch all priorities).
- controlCategoryName (str): Control category name filter (Optional). Default is empty string (fetch all categories).
- page (int): Page number for pagination (Optional). Default is 1 (fetch first page).
- pageSize (int): Number of items per page (Optional). Default is 50.

Returns:
- controls (List[CommonControlVO]): A list of common controls. Each control includes:
  - id (str): Unique identifier of the control.
  - planInstanceID (str): ID of the associated plan instance.
  - alias (str): Alias or alternate name for the control.
  - displayable (str): Flag or content that indicates display eligibility.
  - controlName (str): Name of the control.
  - dueDate (str): Due date assigned to the control.
  - score (float): Score assigned to the control.
  - priority (str): Priority level of the control.
  - status (str): Current status of the control.
  - complianceStatus (str): Compliance status of the control.
  - updatedAt (str): Timestamp when the control was last updated.
- page (int): Current page number in the paginated result.
- totalPage (int): Total number of pages available.
- totalItems (int): Total number of control items.
- error (Optional[str]): An error message if any issues occurred during retrieval.
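The page-wise fetching described above can be sketched as a simple loop. This is an illustrative sketch only: the tool call is stubbed with a local function that returns the documented response shape (controls, page, totalPage, totalItems); in practice the call goes through your MCP client.

```python
# Sketch of page-wise fetching for get_dashboard_common_controls_details.
# The stub below simulates the tool's paginated response shape.

def get_dashboard_common_controls_details(period, page=1, pageSize=50):
    # Stubbed response: 120 controls -> 3 pages of up to 50 items each.
    all_controls = [{"id": str(i), "controlName": f"Control {i}"} for i in range(120)]
    start = (page - 1) * pageSize
    items = all_controls[start:start + pageSize]
    total_pages = -(-len(all_controls) // pageSize)  # ceiling division
    return {"controls": items, "page": page,
            "totalPage": total_pages, "totalItems": len(all_controls)}

def fetch_all_controls(period, pageSize=50):
    controls, page = [], 1
    while True:
        resp = get_dashboard_common_controls_details(period, page=page, pageSize=pageSize)
        controls.extend(resp["controls"])
        if resp["page"] >= resp["totalPage"]:
            break
        page += 1  # more pages available: increase the page number
    return controls
```

The same loop applies to any of the paginated dashboard tools that return page/totalPage fields.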

get_top_over_due_controls_detailC

Fetch the controls that are most overdue. Accepts the number of controls as 'count' and the compliance period as 'period'. The 'period' argument denotes the quarter of the year for which dashboard data is needed. Format: Q1 2024.

Args:
- period (str, required): Compliance period.
- count (int): Page content size. Defaults to 10.

Returns:
- controls (List[OverdueControlVO]): A list of overdue controls. Each control includes:
  - name (str): Name of the control.
  - assignedTo (List[UserVO]): List of users assigned to the control.
    - emailid (str): Email ID of the assigned user.
  - assignmentStatus (str): Assignment status of the control.
  - complianceStatus (str): Compliance status of the control.
  - dueDate (str): Due date for the control.
  - score (float): Score assigned to the control.
  - priority (str): Priority level of the control.
- error (Optional[str]): An error message if any issues occurred during retrieval.

get_top_non_compliant_controls_detailB

Function overview: Fetch controls with low compliance scores (non-compliant controls).

Args:

  1. period: Compliance period, denoting the quarter of the year whose dashboard data is needed. Default: Q1 2024.

  2. count: Number of top non-compliant controls to return.

  3. page: Page number. If the user asks for the next page, increment the page number.

Returns:

  • controls (List[NonCompliantControlVO]): A list of non-compliant controls.

    • name (str): Name of the control.

    • lastAssignedTo (List[UserVO]): List of users to whom the control was last assigned.

      • emailid (str): Email ID of the assigned user.

    • score (float): Score assigned to the control.

    • priority (str): Priority level of the control.

  • error (Optional[str]): An error message if any issues occurred during retrieval.

helpA

Important: This tool should execute when the user asks for help or guidance on using ComplianceCow functions. ComplianceCow Help Tool - provides guidance on how to use ComplianceCow functions.

Args: category: Help category to display. Options:
- "all": Show all available help
- "assessments": Assessment-related functions
- "controls": Control-related functions
- "evidence": Evidence-related functions
- "dashboard": Dashboard and reporting functions
- "assets": Asset management functions
- "actions": Action execution functions
- "queries": Database query functions

Returns: Formatted help text for the specified category

create_support_ticketA

PURPOSE:

  • Create structured support tickets only after strict user review and explicit approval of all descriptions.

  • Ticket creation MUST NOT occur without explicit user confirmation at every required step.

  • Reduce user input errors and rework by ensuring clarity and completeness before ticket submission.

MANDATORY CONDITIONS - NO STEP MAY BE SKIPPED OR BYPASSED:

  1. BEFORE TOOL ENTRY:

  • The tool MUST generate a detailed, pre-filled plain-text description for the task or workflow.

  • The user MUST review this description carefully.

  • Ticket creation MUST be blocked until the user explicitly APPROVES this description.

  2. USER VERIFICATION:

  • The user MUST be presented with the full pre-filled description.

  • The user MUST either confirm its correctness or provide feedback for changes.

  • The tool MUST update the description and priority per feedback and repeat this verification step as many times as needed.

  • Skipping or auto-approving this step is strictly prohibited.

  3. FINAL APPROVAL & FORMATTING:

  • After user approval of the plain text, the description MUST be converted into professional HTML format (bold headings, clear structure, spacing).

  • The user MUST explicitly approve this final HTML-formatted description.

  • The tool MUST block ticket creation until this final approval is given.

  • Only the fully user-approved, HTML-formatted description MAY be used to create the support ticket.

IMPORTANT:
Under no circumstances shall the tool proceed to ticket creation without explicit user approval at all mandatory steps.
The process must strictly enforce these approvals, preventing any premature or automatic ticket submissions.

MANDATORY USER INPUTS:

  • subject (str) - ticket title.

  • description (str) - final user-approved, HTML-formatted description.

  • priority (str) - ticket priority level.
    Valid values: "High", "Medium", "Low" (case-sensitive).
    The user MUST provide one of these values to proceed.

RETURNS:

  • A dictionary simulating the ticket creation response for integration or testing purposes.
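A minimal sketch of the input gate implied by the rules above, assuming the documented field names and the case-sensitive priority values. The real tool enforces its own checks and additionally requires the staged description approvals described earlier; this only illustrates the final validation step.

```python
# Sketch of the final input validation for create_support_ticket.
# Assumption: field names and allowed priorities mirror the docs.

VALID_PRIORITIES = {"High", "Medium", "Low"}  # case-sensitive

def validate_ticket_inputs(subject, description, priority, user_approved):
    if not user_approved:
        raise ValueError("Ticket creation is blocked until the user approves the description")
    if not subject or not description:
        raise ValueError("subject and description are mandatory")
    if priority not in VALID_PRIORITIES:
        raise ValueError(f"priority must be one of {sorted(VALID_PRIORITIES)}")
    return {"subject": subject, "description": description, "priority": priority}
```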

get_applications_for_tagA

Get available applications for a specific app tag.

APPLICATION RETRIEVAL:

  • Fetches all existing applications configured for the specified app tag.

  • Returns a list of applications with ID, name, and app type.

  • Used during rule execution to present application choices to the user.

  • Optionally filters by additional tags (e.g., purpose, sourceSystem) for precise matching.

Args:
- tag_name (str): The app tag name to get applications for. This parameter is mandatory and must not be empty.
- additional_tags (Dict[str, List[str]]): Optional additional tags to filter applications. Example: {"purpose": ["source-repo"]} to find apps with a specific purpose.

Returns: dict: A dictionary containing available applications for the specified tag.

Raises: ValueError: If tag_name is not provided or is empty.

attach_rule_to_controlA

Attach a rule to a specific control in an assessment.

🚨 CRITICAL EXECUTION BLOCKERS - DO NOT SKIP 🚨 Before any part of this tool can run, five preconditions MUST be met:

  1. Control Verification:

  • You MUST verify the control exists in the assessment by calling verify_control_in_assessment().

  • Verification must confirm the control is present, valid, and a leaf control.

  • If verification fails → STOP immediately. Do not proceed.

  2. Rule ID Resolution:

  • If rule_id is a valid UUID → proceed.

  • If rule_id is an alphabetic string → treat it as the rule name and resolve it to a UUID using fetch_cc_rule_by_name().

  • If resolution fails or rule_id is still not a UUID after this step → STOP immediately.

  • Execution is STRICTLY PROHIBITED with a plain name.

  3. Rule Publish Validation:

  • You MUST check if the rule is published in ComplianceCow before proceeding.

  • If the rule is not published → STOP immediately.

  • Published status is a hard requirement for attachment.

  4. Evidence Creation Acknowledgment:

  • Before proceeding, you MUST request confirmation from the user about create_evidence.

  • Ask: "Do you want to auto-generate evidence from the rule output? (default: True)"

  • Only proceed after the user explicitly acknowledges their choice.

  5. Override Acknowledgment:

  • If the control already has a rule attached, you MUST request user confirmation before overriding.

  • Ask: "This control already has a rule attached. Do you want to override it? (yes/no)"

  • Only proceed if the user explicitly confirms.

RULE ATTACHMENT WORKFLOW:

  1. Perform control verification using verify_control_in_assessment() (MANDATORY).

  2. Resolve rule_id using the CRITICAL EXECUTION BLOCKERS above (use fetch_cc_rule_by_name() when needed).

  3. Validate that the rule is published in ComplianceCow.

  4. Confirm evidence creation preference from the user (acknowledgment REQUIRED).

  5. Check for existing rule attachments and request override acknowledgment if needed.

  6. Attach rule to control.

  7. Optionally create evidence for the control.

ATTACHMENT OPTIONS:

  • create_evidence: Whether to create evidence along with rule attachment. Must be confirmed by the user before proceeding.

VALIDATION REQUIREMENTS:

  • Control must be verified and confirmed as a leaf control.

  • Rule must be published.

  • Rule ID must be a valid UUID.

  • Assessment and control must exist.

  • User must acknowledge override before replacing an existing rule.

Args:
- rule_id: ID of the rule to attach (UUID). If an alphabetic string is provided, it MUST be resolved to a UUID using fetch_cc_rule_by_name() before the tool proceeds.
- assessment_name: Name of the assessment.
- control_id: ID of the control.
- create_evidence: Whether to create auto-generated evidence from the rule output (default: True). ⚠️ MUST be confirmed by user acknowledgment before execution.

Returns: Dict containing attachment status and details.
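The rule_id resolution blocker can be sketched as follows. fetch_cc_rule_by_name is stubbed here with a hypothetical rule name and UUID; only the UUID-or-resolve logic reflects the documented behaviour.

```python
# Sketch of rule_id resolution for attach_rule_to_control:
# a valid UUID passes through; anything else is treated as a rule
# name and resolved via the (stubbed) fetch_cc_rule_by_name tool.
import uuid

def _fetch_cc_rule_by_name(rule_name):
    # Stub: hypothetical rule name and UUID for illustration only.
    known = {"MyEncryptionRule": {"id": "3f2504e0-4f89-11d3-9a0c-0305e82c3301"}}
    return known.get(rule_name)

def resolve_rule_id(rule_id):
    try:
        return str(uuid.UUID(rule_id))  # already a valid UUID -> proceed
    except ValueError:
        pass
    rule = _fetch_cc_rule_by_name(rule_id)  # treat input as the rule name
    if rule is None:
        raise ValueError(f"Cannot resolve rule name {rule_id!r} to a UUID; stopping")
    return rule["id"]
```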

fetch_cc_rule_by_idB

Fetch rule details by rule id from the compliancecow.

Args: rule_id: Rule Id of the rule to retrieve

Returns: Dict containing complete rule structure and metadata

fetch_cc_rule_by_nameB

Fetch rule details by rule name from the compliancecow.

Args: rule_name: Rule name of the rule to retrieve

Returns: Dict containing complete rule structure and metadata

publish_ruleC

Publish a rule to make it available for ComplianceCow system.

CRITICAL WORKFLOW RULES:

  • MANDATORY: Check rule status to ensure rule is fully developed before publishing

  • MUST FOLLOW THESE STEPS EXACTLY

  • DO NOT ASSUME OR SKIP ANY STEPS

  • APPLICATIONS FIRST, THEN RULE

  • WAIT FOR USER AT EACH STEP

  • NO SHORTCUTS OR BYPASSING ALLOWED

RULE PUBLISHING HANDLING:

WHEN TO USE:

  • After successful rule creation

  • User wants to make rule available for others

  • Rule has been tested and validated

WORKFLOW (step-by-step with user confirmation):

  1. Fetch applications and check status

  • Call fetch_applications() to get available applications

  • Extract appTypes from ALL tasks in rule spec.tasks[].appTags.appType - MUST TAKE ALL THE TASKS APPTYPE AND REMOVE DUPLICATES - CRITICAL: DO NOT SKIP ANY TASK APPTYPES

  • Match ALL task appTypes with applications app_type to get application_class_name

  • Call check_applications_publish_status() for ALL matched applications

  2. Present consolidated applications in a meaningful format:

Applications for your rule:
[1] App Name | Type: xyz | Status: Published | Action: Republish
[2] App Name | Type: abc | Status: Not Published | Action: Publish

Select applications to publish: ___

  • MANDATORY: WAIT for user selection before proceeding to next step

  • DO NOT CONTINUE without explicit user input

  • BLOCK execution until user provides selection

  • STOP HERE: Cannot proceed to step 3 without user response

  • HALT WORKFLOW: Wait for user to select application numbers

  • NEVER SKIP THIS STEP: User must select applications first

  • ALWAYS ASK FOR SELECTION EVEN IF ALL APPLICATIONS ARE PUBLISHED

  3. Publish selected applications (BLOCKED until step 2 complete)

  • ENTRY REQUIREMENT: User selection from step 2 must be provided

  • PREREQUISITE CHECK: Verify user provided application numbers

  • CANNOT EXECUTE: Without completing step 2 user selection

  • Get user selection numbers

  • Call publish_application() for selected applications only

  • Inform user whether successfully published or not

  • CHECKPOINT: All applications must be published before rule steps

  4. Check rule publication status (APPLICATIONS MUST BE COMPLETE FIRST)

  • GATE KEEPER: Cannot proceed without application publishing completion

  • MANDATORY PREREQUISITE: All application steps finished

  • BLOCKED ACCESS: No rule operations until applications handled

  • Call check_rule_publish_status()

  • Check response valid field:

    • True = Already published

    • False = Not published

  5. Handle rule publishing based on status. If valid=False (not published):

  • Show: "Rule is not published. Do you want to publish it? (yes/no)"

  • If yes: Proceed with publishing using current name

If valid=True (already published):

  • Show: "Rule is already published. Choose option:"

    • [1] Republish with same name

    • [2] Publish with another name

  • Get user choice

  6. Handle alternative name logic. If "another name" chosen:

    1. Ask: "Enter new rule name: ___"

    2. Call check_rule_publish_status(new_name)

    3. If name exists: "Name already exists. Choose option:"

      • [1] Use same name (republish)

      • [2] Enter another name

    4. If name available: Proceed with new name

    5. Keep checking until user chooses available name or decides to republish existing

  7. Final publication

  • Call publish_rule() with confirmed name

  • Inform user: "Published successfully" or "Publication failed"

  8. Rule Association:

    • Publishes the rule to make it available for control attachment

    • Ask user: "Do you want to attach this rule to a ComplianceCow control? (yes/no)"

    • If yes: Proceed to associate the rule with control and request assessment name and control alias from the user

    • If no: End workflow

EXECUTION CONTROL MECHANISMS:

  • STEP GATE: Each step requires completion before next

  • USER GATE: Each step requires user input/confirmation

  • EXECUTION BLOCKER: No tool calls without user response

  • WORKFLOW ENFORCER: Steps cannot be skipped or assumed

  • SEQUENTIAL LOCK: Must complete in exact order

Args:
- rule_name: Name of the rule to publish
- cc_rule_name: Optional alternative name for publishing

Returns: Dict with publication status and details

fetch_assessmentsA

Fetch the list of available assessments in ComplianceCow.

TOOL PURPOSE:

  • Retrieves a list of available assessments if no specific match is provided.

  • Returns only basic assessment info (id, name, category) without the full control hierarchy.

  • Used to confirm the assessment name while attaching a rule to a specific control.

Args: categoryId (Optional[str]): Assessment category ID.
categoryName (Optional[str]): Assessment category name.
assessmentName (Optional[str]): Assessment name.

Returns: - assessments (List[Assessments]): A list of assessment objects, each containing:
- id (str): Unique identifier of the assessment.
- name (str): Name of the assessment.
- category_name (str): Name of the category.
- error (Optional[str]): An error message if any issues occurred during retrieval.

fetch_leaf_controls_of_an_assessmentA

Fetch only the leaf controls for a given assessment. If assessment_id is not provided, use other tools to get the assessment and its id.

Args: - assessment_id (str, required): Assessment id or plan id.

Returns:
- controls (List[AutomatedControlVO]): List of controls. Each control includes:
  - id (str): Control ID.
  - displayable (str): Displayable name or label.
  - alias (str): Alias of the control.
  - activationStatus (str): Activation status.
  - ruleName (str): Associated rule name.
  - assessmentId (str): Assessment identifier.
- error (Optional[str]): An error message if any issues occurred during retrieval.

verify_control_in_assessmentA

Verify the existence of a specific control by alias within an assessment and confirm it is a leaf control.

CONTROL VERIFICATION AND VALIDATION:

  • Confirms the control with the specified alias exists in the given assessment.

  • Validates that the control is a leaf control (eligible for rule attachment).

  • Checks if a rule is already attached to the control.

  • Returns control details and attachment status.

LEAF CONTROL IDENTIFICATION:

  • A control is considered a leaf control if:

  • leafControl = true, OR

  • has no planControls array, OR

  • planControls array is empty.

  • Only leaf controls can have rules attached.

  • If the control is not a leaf control, an error will be returned.

Args: assessment_name: Name of the assessment. control_alias: Alias of the control to verify.

Returns: Dict containing control details, leaf status, and rule attachment info.
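The leaf-control rules above can be expressed as a small predicate. This is a sketch; the field names (leafControl, planControls) follow the documented response shape.

```python
# A control is a leaf if leafControl is true, or it has no
# planControls array, or that array is empty. Only leaf controls
# can have rules attached.

def is_leaf_control(control: dict) -> bool:
    if control.get("leafControl") is True:
        return True
    children = control.get("planControls")
    return children is None or len(children) == 0
```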

check_applications_publish_statusB

Check publication status for each application in the provided list.

app_info structure is [{"name":["ACTUAL application_class_name"]}]

Args: app_info: List of application objects to check

Returns: Dict with publication status for each application. Each app will have 'published' field: True if published, False if not.
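Building the app_info payload from task appTypes (after removing duplicates, as the publish workflow requires) might look like the sketch below; the application class names are hypothetical.

```python
# Sketch: build the app_info payload for check_applications_publish_status.
# Each entry wraps an application_class_name in the documented
# [{"name": ["<application_class_name>"]}] shape.

def build_app_info(application_class_names):
    unique = sorted(set(application_class_names))  # de-duplicate across tasks
    return [{"name": [class_name]} for class_name in unique]
```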

check_rule_publish_statusC

Check if a rule is already published.

  • If not published → publish the rule so it becomes available for control attachment

  • Once published, prompt the user:
    "Do you want to attach this rule to a ComplianceCow control? (yes/no)"

  • If yes → ask for assessment name and control alias to proceed with association

  • If no → end workflow

Args: rule_name: Name of the rule to check

Returns: Dict with publication status and details

publish_applicationC

Publish applications to make them available for rule execution.

Args: rule_name: Name of the rule these applications belong to app_info: List of application objects to publish

Returns: Dict with publication results for each application

list_checksB

Retrieve all checks associated with an asset.

Args: - assetId (str): Asset id (plan id).

Returns:
- success (bool): Indicates if the operation completed successfully.
- checks (List[dict]): A list of checks. Each check includes:
  - id (str): Check id.
  - name (str): Name of the check.
- error (Optional[str]): An error message if any issues occurred during retrieval.

get_asset_control_hierarchyA

Retrieve the complete control hierarchy for an asset with nested plan controls. Returns only id and name for each control while preserving the full hierarchical structure.

Args: - assetId (str): Asset id.

Returns:
- success (bool): Indicates if the operation completed successfully.
- planControls (List[dict]): Nested hierarchy of controls with only id and name. Each control contains:
  - id (str): Control id.
  - name (str): Name of the control.
  - planControls (List[dict]): Nested child controls (same structure, recursive).
- error (Optional[str]): An error message if any issues occurred during retrieval.

add_check_to_assetB

Add a new control and a new check to an asset under a specified parent control. The check will be attached to the newly created control beneath the parent control.

Args: - assetId (str): Asset id. - parentControlId (str): Parent control id under which the check will be added. - checkName (str): Name of the check to be added. - checkDescription (str): Description of the check to be added.

Returns: - success (bool): Indicates if the check was added successfully. - error (Optional[str]): An error message if any issues occurred during the addition.

create_asset_and_checkA

Create a new asset with an initial control and check structure. The asset will be created with a hierarchical structure: asset -> parentcontrol -> control -> check.

Args: - assetName (str): Name of the asset to be created. - controlName (str): Name of the initial control to be created within the asset. - checkName (str): Name of the initial check to be created under the control. (letters and numbers only, no spaces) - checkDescription (str): Description of the initial check.

Returns: - success (bool): Indicates if the asset was created successfully. - assetId (str): ID of the created asset (only present if successful). - error (Optional[str]): An error message if any issues occurred during creation.

schedule_asset_executionA

Schedule automated execution for an asset.

IMPORTANT WORKFLOW & SAFETY RULES:

  • User inputs (runPrefixName, cronTab) are mandatory and cannot be bypassed or assumed.

  • The cronTab string MUST be constructed explicitly from the user's schedule instructions (e.g., frequency, time-of-day, timezone). Never auto-generate it without user confirmation.

  • controlPeriod MUST be one of the supported values.

  • controlDuration MUST be a positive integer provided by the user.

Args:

    • assetId (str): Id of the asset to be scheduled.

    • runPrefixName (str): Human-readable name/prefix for this scheduled run.

    • description (str): Description for the scheduled run.

    • cronTab (str): Full cron expression including timezone (e.g. TZ=Asia/Calcutta 0 0 * * *), explicitly provided/confirmed by the user. Must not be assumed or defaulted.

    • controlPeriod (str): Control period for the assessment run, type selected by the user. Allowed values:
      - DAY → Last few days
      - WEEK → Last few weeks
      - MONTH → Last few months
      - CAL_WEEK → Last few calendar weeks
      - CAL_MONTH → Last few calendar months

    • controlDuration (int): Duration count for the selected control period.

Returns:

    • success (bool): Indicates if the schedule was created successfully.

    • scheduleId (str): ID of the created schedule (only present if successful).

    • error (Optional[str]): An error message if any issues occurred during creation.
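Client-side validation of the scheduling inputs can be sketched as below. Assumptions: cronTab carries a TZ= prefix plus a standard five-field cron expression, and the allowed controlPeriod values match the documented list; the real tool performs its own validation.

```python
# Sketch: validate schedule_asset_execution inputs before calling the tool.

ALLOWED_PERIODS = {"DAY", "WEEK", "MONTH", "CAL_WEEK", "CAL_MONTH"}

def validate_schedule(cronTab, controlPeriod, controlDuration):
    if not cronTab.startswith("TZ="):
        raise ValueError("cronTab must include a timezone, e.g. 'TZ=Asia/Calcutta 0 0 * * *'")
    if len(cronTab.split()) != 6:  # TZ=... token plus five cron fields
        raise ValueError("cronTab must have five cron fields after the TZ= part")
    if controlPeriod not in ALLOWED_PERIODS:
        raise ValueError(f"controlPeriod must be one of {sorted(ALLOWED_PERIODS)}")
    if not (isinstance(controlDuration, int) and controlDuration > 0):
        raise ValueError("controlDuration must be a positive integer")
    return True
```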

list_asset_schedulesC

List schedules for a given asset.

Args: - assetId (str): Asset ID whose schedules need to be listed

Returns: - success (bool) - items (list): List of schedules - error (Optional[str])

delete_asset_scheduleB

Delete an existing assessment schedule.

Args: - scheduleId (str): ID of the schedule to delete

Returns: - success (bool) - error (Optional[str])

suggest_control_config_citationsA

Suggest control citations for a given control name or description.

WORKFLOW: When user provides a requirement, ask which assessment they want to use. Get assessment name from user, then resolve to assessmentId (mandatory). For control: offer two options - select from existing control on selected assessment OR create new control. If selecting existing control, get control name from user and resolve to controlId. If creating new control, controlId will be empty.

This function provides suggestions for control citations based on control names or descriptions. The user can select from the suggested controls to attach citations to their assessment controls.

Args: controlName (str): Name of control to get suggestions for (required). assessmentId (str): Assessment ID - resolved from assessment name (required). description (str, optional): Description of the control to get suggestions for. controlId (str, optional): Control ID - resolved from control name if selecting existing control, empty if creating new control.

Returns: Dict with success status and suggestions:
- success (bool): Whether the request was successful
- items (List[dict]): List of suggestion items, each containing:
  - inputControlName (str): The input control name
  - controlId (str): The control ID (empty if control doesn't exist yet)
  - suggestions (List[dict]): List of suggested controls, each containing:
    - Name (str): Control name
    - Control ID (int): Control ID number
    - Control Classification (str): Classification type
    - Impact Zone (str): Impact zone category
    - Control Requirement (str): Requirement level
    - Sort ID (str): Sort identifier
    - Control Type (str): Type of control
    - Score (float): Similarity score
    - authorityDocument (str): Name of the authority document
- error (str, optional): Error message if request failed

add_citation_to_asset_controlC

Add a citation to a control in an asset, linking the asset control to a control in an authority document.

Args: - assetControlId (str): Id of the control in asset. - authorityDocument (str): Authority document name of the citation. - authorityDocumentControlId (str): Id of the control in authority document.

Returns: - success (bool): Indicates if the citation was created successfully. - error (Optional[str]): An error message if any issues occurred during creation.

verify_control_automationB

Verify if a control is automated or not based on the presence of ruleId. If ruleId exists, fetch and return basic rule information.

Args: control_id: The ID of the control to verify

Returns: Dictionary containing automation status and rule details if automated

fetch_cc_rules_listA

Fetch list of CC rules with only name, description, and id. This tool should ONLY be used for attaching rules to control flows.

Args: params: Optional query parameters for filtering/pagination - name_contains: Filter rules by name containing this string - page_size: Number of items to be returned (default 100)

Returns: List of simplified rule objects containing only name, description, and id

create_control_noteA

Create a documentation note on a control.

This tool creates a markdown documentation note that is attached to a control.

✅ CONFIRMATION-BASED SAFETY FLOW

  • When confirm=False: the tool returns a PREVIEW of the generated markdown note, which the user may edit before confirming.

  • When confirm=True: the note is permanently created and attached to the control.

Args:
- controlId (str): The control ID where the note will be attached (required).
- assessmentId (str): The assessment ID or asset ID that contains the control (required).
- notes (str): The documentation content in MARKDOWN format (required).
- topic (str, optional): Topic or subject of the note.
- confirm (bool, optional):
  - False → Preview only (default, no persistence)
  - True → Create and permanently attach the note

Returns: Dict with success status and note data:
- success (bool): Whether the request was successful
- note (dict, optional): Created note object containing:
  - id (str): Note ID
  - topic (str): Note topic
  - notes (str): Note content in markdown format
  - controlId (str): Control ID the note is attached to
  - assessmentId (str): Assessment ID
- error (str, optional): Error message if request failed
- next_action (str, optional): Recommended next action

list_control_notesB

List all notes for a given control.

This tool retrieves all notes associated with a control.

Args: controlId (str): The control ID to list notes for (required).

Returns: Dict with success status and notes:
- success (bool): Whether the request was successful
- notes (List[dict]): List of note objects, each containing:
  - id (str): Note ID
  - topic (str): Note topic
  - notes (str): Note content
- totalCount (int): Total number of notes found
- error (str, optional): Error message if request failed

update_control_config_noteA

Update an existing documentation note on a control.

✅ PURPOSE This tool updates an existing note that was previously created on a control. It allows modification of the note content, topic, or both.

✅ CONFIRMATION-BASED SAFETY FLOW

  • When confirm=False: the tool returns a PREVIEW of the updated markdown note, which the user may edit before confirming.

  • When confirm=True: the note is permanently updated and saved.

Args:
- controlId (str): The control ID where the note exists (required).
- noteId (str): The note ID to update (required).
- assessmentId (str): The assessment ID or asset ID that contains the control (required).
- notes (str): The updated documentation content in MARKDOWN format (required).
- topic (str, optional): Updated topic or subject of the note.
- confirm (bool, optional):
  - False → Preview only (default, no persistence)
  - True → Update and permanently save the note

Returns: Dict with success status and note data:
- success (bool): Whether the request was successful
- message (str, optional): Success or error message
- noteId (str, optional): Updated note ID
- error (str, optional): Error message if request failed

get_tasks_summaryB

Resource containing minimal task information for initial selection.

This tool is also used as a fallback resource when fetch_tasks_suggestions is disabled or does not return suitable matches, ensuring the user always has access to a broader list of available tasks for manual selection.

This resource provides only the essential information needed for task selection:

  • Task name and display name

  • Brief description

  • Purpose and capabilities

  • Tags for categorization

  • Inputs/Outputs params with minimal details

  • Basic README summary

Use this for initial task discovery and selection. Detailed information can be retrieved later using tasks://details/{task_name} for selected tasks only.

AUTOMATIC OUTPUT ANALYSIS BY INTENTION:

  • MANDATORY: Analyze each task's output purpose and completion level during selection

  • IDENTIFY output intentions that require follow-up processing:

    • SPLITTING INTENTION: Outputs that divide data into separate categories → REQUIRE consolidation

    • EXTRACTION INTENTION: Outputs that pull raw data without formatting → REQUIRE transformation

    • VALIDATION INTENTION: Outputs that check compliance without final reporting → REQUIRE analysis/reporting

    • PROCESSING INTENTION: Outputs that transform data but don't create final deliverables → REQUIRE finalization

OUTPUT COMPLETION ASSESSMENT:

  • EVALUATE: Does this output serve as a final deliverable for end users?

  • ASSESS: Is this output consumable without additional processing?

  • DETERMINE: Does this output require combination with other outputs to be meaningful?

  • IDENTIFY: Is this output an intermediate step in a larger workflow?

WORKFLOW COMPLETION ENFORCEMENT:

  • NEVER present task selections that end with intermediate processing outputs

  • AUTOMATICALLY suggest tasks that fulfill incomplete intentions

  • ENSURE every workflow produces actionable final deliverables

  • RECOMMEND tasks that bridge gaps between current outputs and user goals

Mandatory functionality:

  • Retrieve a list of task summaries based on the user's request

  • Analyze task outputs and suggest additional tasks for workflow completion

  • If no matching task is found for the requested functionality, prompt user for confirmation

  • Based on user response, either proceed accordingly or create support ticket using create_support_ticket()

get_template_guidanceA

Get detailed guidance for filling out a template-based input.

COMPLETE TEMPLATE HANDLING PROCESS:

STEP 1 - TEMPLATE IDENTIFICATION:

  • Called for inputs that have a templateFile property

  • Provides decoded template content and structure explanation

  • Returns required fields, format-specific tips, and validation rules

STEP 2 - PREFILLING PROCESS:

  1. Analyze template structure for external dependencies

  2. Prefill template with realistic values based on the instructions

RELEVANCE FILTERING:

  • ANALYZE task description and user use case to create targeted search queries

  • EXTRACT key terms from rule purpose and task capabilities

  • COMBINE system name with specific functionality being configured

  • PRIORITIZE documentation that matches the exact use case scenario

STEP 3 - ENHANCED TEMPLATE PRESENTATION TO USER: Show the template with this EXACT format: "Now configuring: [X of Y inputs]

Task: {task_name}
Input: {input_name} - {description}

You can:

  • Accept these prefilled values (type 'accept')

  • Modify specific sections (provide your modifications)

  • Replace entirely (provide your complete configuration)

Please review and confirm or modify the prefilled configuration:"

STEP 4 - FALLBACK TO ORIGINAL TEMPLATE: If no documentation found or prefilling fails:

  • Show original empty template with standard format

  • Include note: "No documentation found for prefilling. Please provide your configuration."

  • Continue with existing workflow

STEP 5 - COLLECT USER CONTENT:

  • Wait for the user to provide their response (accept/modify/replace)

  • Handle "accept" by using prefilled content

  • Handle modifications by merging with prefilled baseline

  • Handle complete replacement with user content

  • Do NOT proceed until the user provides content

  • NEVER use template content as default values without documentation analysis

STEP 6 - PROCESS TEMPLATE INPUT:

  • Call collect_template_input(task_name, input_name, user_content)

  • Include documentation source metadata

  • Validates content format, checks required fields, uploads file

  • Returns file URL for use in rule structure

TEMPLATE FORMAT HANDLING:

  • JSON: Must be valid JSON with proper brackets and quotes

  • TOML: Must follow TOML syntax with proper sections [section_name]

  • YAML: Must have correct indentation and structure

  • XML: Must be well-formed XML with proper tags

VALIDATION RULES:

  • Format-specific syntax validation

  • Required field presence checking

  • Data type validation where applicable

  • Template structure compliance

  • Documentation standard compliance (when applicable)

CRITICAL TEMPLATE RULES:

  • ALWAYS call get_template_guidance() for inputs with templates

  • ALWAYS analyze documentation before showing template to user

  • ALWAYS show the prefilled template (or original if no docs found) with exact presentation format

  • ALWAYS wait for the user to provide response (accept/modify/replace)

  • ALWAYS call collect_template_input() to process user content

  • NEVER use template content directly - always use documentation-enhanced or user-provided content

  • ALWAYS use returned file URLs in rule structure

PROGRESS TRACKING:

  • Show "Now configuring: [X of Y inputs]" for user progress

  • Include clear task and input identification

  • Provide format-specific guidance and tips

  • Include documentation analysis results and source citations

Args:
    task_name: Name of the task
    input_name: Name of the input that has a template

Returns: Dict containing template content, documentation analysis, prefilled values, and guidance

collect_template_inputA

Collect user input for template-based task inputs.

TEMPLATE INPUT PROCESSING (Enhanced with Progressive Saving):

  • Validates user content against template format (JSON/TOML/YAML)

  • Handles JSON arrays and objects properly

  • Checks for required fields from template structure

  • Uploads validated content as file (ONLY for FILE dataType inputs)

  • Returns file URL for use in rule structure

  • MANDATORY: Gets final confirmation for EVERY input before proceeding

  • CRITICAL: Only processes user-provided content, never use default templates

  • NEW: Prepared for automatic rule updates in confirm step

JSON ARRAY HANDLING (Preserved):

  • Properly validates JSON arrays: [{"key": "value"}, {"key": "value"}]

  • Validates JSON objects: {"key": "value", "nested": {"key": "value"}}

  • Handles complex nested structures with arrays and objects

  • Validates each array element and object property
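A minimal Python sketch of the top-level check described above (`validate_json_content` is an illustrative helper name, not the tool's actual implementation):

```python
import json

def validate_json_content(content: str) -> bool:
    """Accept JSON arrays and objects, including nested structures."""
    try:
        parsed = json.loads(content)
    except json.JSONDecodeError:
        return False
    # Per the rules above, the top level must be an array or an object.
    return isinstance(parsed, (list, dict))
```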

VALIDATION REQUIREMENTS (Preserved):

  • JSON: Must be valid JSON (arrays/objects) with proper brackets and quotes

  • TOML: Must follow TOML syntax with proper sections [section_name]

  • YAML: Must have correct indentation and structure

  • XML: Must be well-formed XML with proper tags

  • Required fields: All template fields must be present in user content

STREAMLINED WORKFLOW:

  1. User provides template content

  2. Validate and process immediately

  3. Auto-proceed if validation passes

FILE NAMING CONVENTION (Preserved):

  • Format: {task_name}_{input_name}.{extension}

  • Extensions: .json, .toml, .yaml, .xml, .txt based on format
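The naming convention can be sketched as follows (a hypothetical helper; the `.txt` fallback for unlisted formats is an assumption based on the extension list above):

```python
def template_filename(task_name: str, input_name: str, fmt: str) -> str:
    """Build '{task_name}_{input_name}.{extension}' per the convention above."""
    extensions = {"json": "json", "toml": "toml", "yaml": "yaml", "xml": "xml"}
    # Assumption: formats outside the list fall back to .txt.
    return f"{task_name}_{input_name}.{extensions.get(fmt.lower(), 'txt')}"
```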

WORKFLOW INTEGRATION (Enhanced):

  1. Called after get_template_guidance() shows template to user

  2. User provides their actual configuration content

  3. This tool validates content (including JSON arrays)

  4. Shows content preview and asks for confirmation

  5. Only after confirmation: uploads file or stores in memory

  6. Returns file URL or memory reference for rule structure

  7. NEW: Prepared for rule update in confirm_template_input()

CRITICAL RULES (Preserved):

  • ONLY upload files for inputs with dataType = "FILE" or "HTTP_CONFIG"

  • Template inputs and HTTP_CONFIG inputs are typically file types and need file uploads

  • Store non-FILE template content in memory

  • ALWAYS get final confirmation before proceeding

  • Handle JSON arrays properly: validate each element

  • Never use template defaults - always use user-provided content

MANDATORY: Task-sequential collection only. Sanitize input names (alphanumeric + underscore).

Args:
    task_name: Name of the task this input belongs to
    input_name: Name of the input parameter
    user_content: Content provided by the user based on the template

Returns: Dict containing validation results and file URL or memory reference, prepared for progressive rule updates

confirm_template_inputA

Confirm and process template input after user validation.

CONFIRMATION PROCESSING (Enhanced with Automatic Rule Updates):

  • Handles final confirmation of template content

  • Uploads files for FILE dataType inputs

  • Stores content in memory for non-FILE inputs

  • MANDATORY step before proceeding to next input

  • NEW: Automatically updates the rule with new input after processing

  • Skips confirmation if the user accepts the suggested template

PROCESSING RULES (Enhanced):

  • FILE dataType: Upload content as file, return file URL

  • HTTP_CONFIG dataType: Upload content as file, return file URL

  • Non-FILE dataType: Store content in memory

  • Include metadata about confirmation and timestamp

  • NEW: Automatic rule update with new input data

AUTOMATIC RULE UPDATE PROCESS: After successful input processing, this tool automatically:

  1. Fetches the current rule structure

  2. Adds the new input to spec.inputs

  3. Updates spec.inputsMeta__ with input metadata

  4. Calls create_rule() to save the updated rule

  5. Rule status will be auto-detected (DRAFT → collecting_inputs → READY_FOR_CREATION)

UI DISPLAY REQUIREMENT:

  • The file URL must ALWAYS be displayed to the user in the UI, allowing the user to view or download the file directly.

Args:
    rule_name: Descriptive name for the rule based on the user's use case.
               Note: Use the same rule name for all inputs that belong to this rule.
               Example: rule_name = "MeaningfulRuleName"
    task_name: Name of the task this input belongs to
    input_name: Name of the input parameter
    rule_input_name: Must be one of the values defined in the rule structure's inputs
    confirmed_content: The content user confirmed

Returns: Dict containing processing results (file URL or memory reference) and rule update status

upload_fileA

Upload file content and return file URL for use in rules.

ENHANCED FILE UPLOAD PROCESS:

  • Automatically detects file format from filename and content

  • Validates and fixes common formatting issues for JSON, YAML, TOML, CSV, XML

  • Accepts JSON arrays in various formats: raw, single-line, multi-line, or escaped (auto-formatted).

  • Normalizes CSV delimiters and whitespace

  • Reformats content with proper indentation/structure

  • No user preview required - validation happens automatically

  • Returns detailed validation results and file URL

SUPPORTED INPUT FORMATS:

  • Raw JSON: {"key": "value"} or [{"key": "value"}]

  • Escaped JSON: "{\"key\": \"value\"}"

  • Complex escaped: "[{\"repository\":\"name\",\"owner\":\"org\"}]"

  • Standard strings for other formats (YAML, TOML, CSV, XML)

AUTOMATIC FORMAT PROCESSING:

  • JSON: Detects escaped strings, unescapes, validates syntax, reformats with indentation

  • Raw JSON objects/arrays: Automatically converts to proper JSON string format

  • YAML: Validates structure, reformats with proper indentation

  • TOML: Validates sections and key-value pairs, reformats

  • CSV: Detects delimiter, strips cell whitespace, normalizes format

  • XML: Validates well-formed structure

  • Other formats: Pass through as-is
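To make the escaped-JSON handling concrete, here is a minimal sketch of the detect-unescape-reformat step for JSON content (`normalize_json` is an illustrative name, not the server's actual function; it assumes an escaped payload's inner string is itself valid JSON):

```python
import json

def normalize_json(content: str) -> str:
    """Unescape escaped JSON payloads if needed, then reformat with indentation."""
    parsed = json.loads(content)
    # An escaped payload parses to a *string* whose body is itself JSON,
    # so parse one more level before reformatting.
    if isinstance(parsed, str):
        parsed = json.loads(parsed)
    return json.dumps(parsed, indent=2)
```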

VALIDATION RESULTS:

  • Returns success/failure status with detailed error messages

  • Provides format-specific validation feedback

  • Indicates if content was automatically reformatted

  • Includes file metadata (size, format, etc.)

Args:
    rule_name: Descriptive name for the rule (same across all rule inputs)
    file_name: Name of the file to upload
    content: File content (text or base64 encoded). CRITICAL: Must be stringified if JSON content
    content_encoding: Encoding of the content (utf-8, base64)

Returns: Dict containing upload results:
    {
        success: bool,
        file_url: str,
        filename: str,
        unique_filename: str,
        file_id: str,
        file_format: str,
        content_size: int,
        validation_status: str,
        was_formatted: bool,
        message: str,
        error: Optional[str]
    }

collect_parameter_inputA

Collect user input for non-template parameter inputs.

PARAMETER INPUT PROCESSING:

  • Collects primitive data type values (STRING, INT, FLOAT, BOOLEAN, DATE, DATETIME)

  • Stores values in memory (NEVER uploads files for primitive types)

  • Handles optional vs required inputs based on 'required' attribute

  • Supports default value confirmation workflow

  • Validates data types and formats

  • MANDATORY: Gets final confirmation for EVERY input before proceeding

INPUT REQUIREMENT RULES:

  • MANDATORY: Only if input.required = true

  • OPTIONAL: If input.required = false, user can skip or provide value

  • DEFAULT VALUES: If user requests defaults, must get confirmation

  • FINAL CONFIRMATION: Always required before proceeding to next input

DEFAULT VALUE WORKFLOW:

  1. User requests to use default values

  2. Show default value to user for confirmation

  3. "I can fill this with the default value: '[default_value]'. Confirm?"

  4. Only proceed after explicit user confirmation

  5. Store confirmed default value in memory

FINAL CONFIRMATION WORKFLOW (MANDATORY):

  1. After user provides value (or confirms default)

  2. Show final confirmation: "You entered: '[value]'. Is this correct? (yes/no)"

  3. If 'yes': Store value and proceed to next input

  4. If 'no': Allow user to re-enter value

  5. NEVER proceed without final confirmation

DATA TYPE VALIDATION:

  • STRING: Any text value

  • INT: Integer numbers only

  • FLOAT: Decimal numbers

  • BOOLEAN: true/false, yes/no, 1/0

  • DATE: YYYY-MM-DD format

  • DATETIME: ISO 8601 format
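A hedged sketch of how these primitive checks could be implemented in Python (`validate_primitive` is an illustrative name; the real tool's validation may differ):

```python
from datetime import datetime

def validate_primitive(value: str, data_type: str) -> bool:
    """Check a string value against the primitive data types listed above."""
    try:
        if data_type == "STRING":
            return True  # any text value
        if data_type == "INT":
            int(value)
        elif data_type == "FLOAT":
            float(value)
        elif data_type == "BOOLEAN":
            return value.lower() in {"true", "false", "yes", "no", "1", "0"}
        elif data_type == "DATE":
            datetime.strptime(value, "%Y-%m-%d")  # YYYY-MM-DD
        elif data_type == "DATETIME":
            datetime.fromisoformat(value)  # ISO 8601
        else:
            return False
        return True
    except ValueError:
        return False
```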

COLLECTION PRESENTATION: "Now configuring: [X of Y inputs]

Task: {task_name}
Input: {input_name} ({data_type})
Description: {description}
Required: {Yes/No}
Default: {default_value or 'None'}

Please provide a value, type 'default' to use default, or 'skip' if optional:"

CRITICAL RULES:

  • NEVER upload files for primitive data types

  • Store all primitive values in memory only

  • Always confirm default values with user

  • ALWAYS get final confirmation before proceeding to next input

  • Respect required vs optional based on input.required attribute

  • Validate data types before storing

Args:
    task_name: Name of the task this input belongs to
    input_name: Name of the input parameter
    user_value: Value provided by user (optional)
    use_default: Whether to use default value (requires confirmation)

Returns: Dict containing parameter value and storage info

confirm_parameter_inputA

Confirm and store parameter input after user validation.

CONFIRMATION PROCESSING (Enhanced with Automatic Rule Updates):

  • Handles final confirmation of parameter values

  • Stores confirmed values in memory

  • Supports both default value confirmation and final value confirmation

  • MANDATORY step before proceeding to next input

  • NEW: Automatically updates rule with parameter if rule_name provided

CONFIRMATION TYPES (Preserved):

  • "default": User confirmed they want to use default value

  • "final": User confirmed their entered value is correct

  • Both types require explicit user confirmation

STORAGE RULES (Enhanced):

  • Store all confirmed values in memory (never upload files)

  • Only store after explicit user confirmation

  • Include metadata about confirmation type and timestamp

  • NEW: Automatic rule update with parameter data

AUTOMATIC RULE UPDATE PROCESS: If rule_name is provided, this tool automatically:

  1. Fetches the current rule structure

  2. Adds the parameter to spec.inputs

  3. Updates spec.inputsMeta__ with parameter metadata

  4. Calls create_rule() to save the updated rule

  5. Rule status will be auto-detected based on completion

Args:
    task_name: Name of the task this input belongs to
    input_name: Name of the input parameter
    rule_input_name: Must be one of the values defined in the rule structure's inputs
    confirmed_value: The value user confirmed
    explanation: Add explanation only if dataType is JQ_EXPRESSION or SQL_EXPRESSION. This field provides details about the confirmed_value.
    confirmation_type: Type of confirmation ("default" or "final")
    rule_name: Optional rule name for automatic rule updates

Returns: Dict containing stored value confirmation and rule update status

prepare_input_collection_overviewA

INPUT COLLECTION OVERVIEW & RULE CREATION

Prepare and present input collection overview before starting any input collection.

MANDATORY FIRST STEP - INPUT OVERVIEW PROCESS (Enhanced): This tool MUST be called before collecting any inputs. It analyzes all selected tasks and presents a complete overview of what inputs will be needed.

ENHANCED WITH AUTOMATIC RULE CREATION: After user confirms the input overview, this tool automatically creates the initial rule structure with selected tasks. The rule will be saved with DRAFT status and can be progressively updated as inputs are collected.

MANDATORY WORKFLOW ENFORCEMENT - CRITICAL INSTRUCTION:

  • AFTER user confirms the input overview, IMMEDIATELY call create_rule() with initial structure.

  • This call is MANDATORY and CANNOT be skipped or deferred.

  • The initial rule structure MUST be created before any input collection begins.

  • BLOCK all subsequent input collection if initial rule creation fails.

  • NEVER proceed to input collection without successful initial rule creation.

  • If create_rule() fails, STOP workflow and resolve the issue before continuing.

  • The rule creation establishes the foundation for progressive updates during input collection.

ENFORCEMENT STEPS:

  1. Present overview to user

  2. Get user confirmation

  3. IMMEDIATELY call create_rule() with an initial structure that MUST include the inputs and inputsMeta__ sections populated with actual input data. DO NOT leave inputs or inputsMeta__ empty; they are mandatory core components that must contain the required input mappings. This is non-negotiable, with no exceptions.

  4. Verify rule creation success before proceeding

  5. Only then allow input collection to begin

TASK-BY-TASK INPUT COLLECTION & VALIDATION (CRITICAL ENFORCEMENT):
═══════════════════════════════════════════════════════════════════
MANDATORY WORKFLOW FOR EACH TASK:

FOR EACH TASK in selected_tasks:

STEP 1: Collect ALL inputs for current task
        - Use collect_template_input() for file/template inputs
        - Use collect_parameter_input() for parameter inputs
        - Wait for the current task inputs to be collected

STEP 2: **MANDATORY EXECUTION** (CANNOT BE SKIPPED)
        ⛔ THIS STEP CANNOT BE SKIPPED ⛔
        - Call execute_task(task_name, collected_inputs_for_this_task, application_config)
        - This MUST happen IMMEDIATELY after all task inputs are collected
        - BLOCK progression if execution fails
        - If execution fails:
          * Show execution errors to user
          * Allow input correction
          * Re-execute with corrected inputs
          * Only proceed when execution succeeds
        - On success:
          * Store the REAL outputs from this task
          * Use these outputs as inputs for dependent tasks
          * Display output files to user

STEP 3: Move to next task ONLY after the task execution succeeds
        - Task Execution success = prerequisite for next task
        - No task can start input collection without previous task execution completing successfully
        - Use the REAL outputs from the executed task as inputs for dependent tasks

❌ PROHIBITED ACTIONS:

  • Collecting inputs for Task N+1 without executing Task N

  • Skipping execution "to save time"

  • Assuming execution will happen "later"

  • Moving to final rule creation without executing all tasks

✅ CORRECT WORKFLOW: Task1 Inputs → Execute Task1 → Show Results → Task2 Inputs → Execute Task2 → Show Results → Task3 Inputs → Execute Task3 → Show Results → Complete Rule

❌ WRONG WORKFLOW: Task1 Inputs → Task2 Inputs → Task3 Inputs → [Try to execute later]

SELECTIVE INPUT INCLUSION:

  • DO NOT automatically include ALL task inputs in initial rule creation.

  • Only include inputs that are REQUIRED or explicitly needed for the user's use case.

  • Skip optional inputs unless user specifically requests them.

  • Additional inputs can be added later if needed during execution or refinement.

FAILURE HANDLING:

  • If user confirms but create_rule() fails → STOP and fix issue.

  • If user declines → End workflow, no rule creation needed.

  • If create_rule() succeeds → Proceed to task-wise input collection and execution.

  • NEVER skip the create_rule() call after user confirmation.

HANDLES DUPLICATE INPUT NAMES WITH TASK ALIASES (Preserved):

  • Creates unique identifiers for each task-alias-input combination.

  • Format: "{task_alias}.{input_name}" for uniqueness.

  • Prevents conflicts when multiple tasks have same input names or same task used multiple times.

  • Maintains clear mapping between task aliases and their specific inputs.

  • Task aliases should be simple, meaningful step indicators (e.g., "step1", "validation", "processing").
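For illustration, the unique-identifier scheme above might be built like this (hypothetical helper names; the duplicate-task case mirrors aliases such as "step1" and "final_check" used for the same task):

```python
def unique_input_id(task_alias: str, input_name: str) -> str:
    """Build the '{task_alias}.{input_name}' identifier described above."""
    return f"{task_alias}.{input_name}"

def build_input_ids(selected_tasks, inputs_by_task):
    """Map every (alias, input) pair to a unique ID; aliases disambiguate
    the same task used multiple times or shared input names across tasks."""
    return {
        unique_input_id(t["task_alias"], name)
        for t in selected_tasks
        for name in inputs_by_task[t["task_name"]]
    }
```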

OVERVIEW REQUIREMENTS (Preserved):

  1. Analyze ALL selected tasks with their aliases for input requirements.

  2. Categorize inputs: templates vs parameters.

  3. Create unique identifiers for each task-alias-input combination.

  4. Count total inputs needed.

  5. Present clear overview to user.

  6. Get user confirmation before proceeding.

  7. Return structured overview for systematic collection.

  8. NEW: Automatically create initial rule after user confirmation.

OVERVIEW PRESENTATION FORMAT (Enhanced with Validation):

INPUT COLLECTION OVERVIEW:

I've analyzed your selected tasks. Here's what we need to configure:

TASK 1: [TaskAlias] ([TaskName])
───────────────────────────────────
Template Inputs:
  • [InputName] ([Format] file) - [Description]
    Unique ID: [TaskAlias.InputName]

Parameter Inputs:
  • [InputName] ([DataType]) - [Description]
    Unique ID: [TaskAlias.InputName]
    Required: [Yes/No]

⚠️ EXECUTION CHECKPOINT: After collecting all Task 1 inputs, execute_task() will be called to execute the task with real data before proceeding to Task 2.

TASK 2: [TaskAlias] ([TaskName])
───────────────────────────────────
[... similar structure ...]

⚠️ EXECUTION CHECKPOINT: After collecting all Task 2 inputs, execute_task() will be called to execute the task with real data before proceeding to Task 3.

SUMMARY:

  • Total inputs needed: X

  • Template files: Y ([formats])

  • Parameter values: Z

  • Estimated time: ~[X] minutes

  • Execution checkpoints: [number of tasks]

WORKFLOW:

  1. For each task in the rule:

    • Collect all required inputs for the task

    • Execute the task with real data using execute_task()

    • Mark the task as executed (✓)

    • Store REAL outputs for use by dependent tasks

  2. After all tasks are executed → proceed to final rule completion

Ready to start task-by-task input collection with execution checkpoints?

CRITICAL WORKFLOW RULES:

  • ALWAYS call this tool first before any input collection.

  • NEVER start collecting inputs without user seeing overview.

  • NEVER proceed without user confirmation.

  • Create unique task_alias.input identifiers to avoid conflicts.

  • Show clear task-alias-input relationships to user.

  • NEW: Collect inputs task-by-task and execute each task immediately after collection.

  • NEW: Use REAL outputs from executed tasks as inputs for dependent tasks.

  • NEW: Create initial rule structure after user confirmation.

CRITICAL REQUIREMENTS:

  • Input names: alphanumeric + underscore only (auto-sanitize with re.sub(r'[^a-zA-Z0-9_]', '_', name))

  • Collection order: Complete ALL inputs for each task one by one (Task 1 → execute Task 1 → Task 2 → execute Task 2 → Task 3 → execute Task 3)

  • Within each task: collect all inputs, then execute using 'execute_task()' to get real outputs before proceeding

  • If a task (e.g., Task 2) has input files or other inputs that depend on a previous task, use the REAL output from the executed previous task as the input. Do NOT generate sample data.

  • If the previous task has not been executed yet, execute it first to obtain real outputs.
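The sanitization requirement above quotes its own expression; applied as written, it behaves like this:

```python
import re

def sanitize_input_name(name: str) -> str:
    """Keep alphanumerics and underscores; replace everything else with '_',
    using the exact re.sub pattern quoted in the requirements above."""
    return re.sub(r'[^a-zA-Z0-9_]', '_', name)
```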

ARGS:

  • selected_tasks: List of dicts with 'task_name' and 'task_alias'
    Example:
    [
      {"task_name": "data_validation", "task_alias": "step1"},
      {"task_name": "data_processing", "task_alias": "step2"},
      {"task_name": "data_validation", "task_alias": "final_check"}
    ]

Returns: Dict containing structured input overview and collection plan with unique identifiers, plus automatic rule creation capability after user confirmation, with explicit execution checkpoints for each task

verify_collected_inputsA

Verify all collected inputs with user before rule creation.

MANDATORY VERIFICATION STEP (Enhanced):

This tool MUST be called after all inputs are collected but before final rule completion. It presents a comprehensive summary of all collected inputs for user verification.

ENHANCED WITH AUTOMATIC RULE FINALIZATION: After user confirms verification, this tool can automatically finalize the rule by:

  1. Building complete I/O mapping based on task sequence and inputs

  2. Adding mandatory compliance outputs

  3. Setting rule status to ACTIVE

  4. Completing the rule creation process

HANDLES DUPLICATE INPUT NAMES WITH TASK ALIASES (Preserved):

  • Uses unique identifiers (TaskAlias.InputName) for each input

  • Properly maps each unique input to its specific task alias

  • Creates structured inputs for rule creation with unique names when needed

  • Maintains clear separation between inputs from different task instances

VERIFICATION REQUIREMENTS (Preserved):

  1. Show complete summary of ALL collected inputs with unique IDs

  2. Display both template files and parameter values

  3. Show file URLs for uploaded templates

  4. Present clear verification checklist

  5. Get explicit user confirmation

  6. Allow user to modify values if needed

  7. Prepare inputs for rule structure creation with proper task alias mapping

  8. NEW: Automatically finalize rule after user confirmation

VERIFICATION PRESENTATION FORMAT (Preserved): "INPUT VERIFICATION SUMMARY:

Please review all collected inputs before rule creation:

TEMPLATE INPUTS (Uploaded Files):
✓ Task Input: [TaskAlias.InputName]
  Task: [TaskAlias] ([TaskName]) → Input: [InputName]
  Format: [Format]
  File: [filename]
  URL: [file_url]
  Size: [file_size] bytes
  Status: ✓ Validated

PARAMETER INPUTS (Values):
✓ Task Input: [TaskAlias.InputName]
  Task: [TaskAlias] ([TaskName]) → Input: [InputName]
  Type: [DataType]
  Value: [user_value]
  Required: [Yes/No]
  Status: ✓ Set

VERIFICATION CHECKLIST:
□ All required inputs collected
□ Template files uploaded and validated
□ Parameter values set and confirmed
□ No missing or invalid inputs
□ Ready for rule creation

Are all these inputs correct?

  • Type 'yes' to proceed with rule creation

  • Type 'modify [TaskAlias.InputName]' to change a specific input

  • Type 'cancel' to abort rule creation"

CRITICAL VERIFICATION RULES (Enhanced):

  • NEVER proceed to final rule creation without user verification

  • ALWAYS show complete input summary with unique identifiers

  • ALWAYS get explicit user confirmation

  • Allow input modifications using unique IDs

  • Validate completeness before approval

  • Prepare structured inputs for rule creation with proper task mapping

  • NEW: Automatically finalize rule with I/O mapping after confirmation

Args: collected_inputs: Dict containing all collected template files and parameter values with unique IDs

Returns: Dict containing verification status, user confirmation, and structured inputs for rule finalization

execute_taskA
Execute a specific task with real data after collecting all required inputs.

**This tool executes tasks with REAL data, not sample data.**
If any input depends on a previous task's output and that output is not available,
the dependent task(s) MUST be executed first to obtain the real output.

===============================================================================
EXECUTION CONTEXT
===============================================================================
- This tool MUST be called after collecting the inputs for a task.
- Execution is sequential: execute Task 1 → then Task 2 → etc.
- No task may proceed until its dependent tasks have been executed.
- On execution failure, provide detailed error feedback.

===============================================================================
DEPENDENCY & REAL DATA HANDLING
===============================================================================
If a task requires input from a previous task (dataset, file, or structured output):

1. **Use real task output when available**
    - If the dependent task was already executed and produced outputs:
        → Use those outputs as the input.
        → Do NOT generate synthetic/sample data.
        → Do NOT re-run the previous task unnecessarily.

2. **If required previous task output does NOT exist**
    - The assistant MUST:
        - Explain *why* execution of the previous task is required.
        - Automatically execute the previous task (and any required tasks in the chain).
        - **After execution, display all execution results and outputs.**
        - NO user confirmation should be requested; only explanation.
        - Use the REAL output from the executed task as input.

3. **If executing a required previous task fails**
    - The assistant MUST:
        - Explain clearly why the task failed.
        - Ask the user to provide the required input data manually.
    - User-provided data becomes the fallback input.

4. **Only execute what is needed**
    - Execute ONLY the minimal set of tasks whose outputs are required.
    - **Every executed task must have its results shown to the user immediately.**

===============================================================================
APPLICATION CONFIGURATION
===============================================================================
Application credentials are REQUIRED if the task's appType is NOT 'nocredapp'.

If the task requires application credentials (appType != 'nocredapp'):
- Application config must be provided with:
    - appName: Application class name
    - appURL: Application URL (optional, can be empty string)
    - credentialType: Type of credentials
    - credentialValues: Actual credential key-value pairs
- OR applicationId if using existing saved application

If the task's appType is 'nocredapp':
- Application configuration can be omitted (pass None or empty)
- The system will automatically use the hardcoded nocredapp application structure:
  {
      "applicationType": "NoCredApp",
      "appURL": "",
      "credentialType": "NoCred",
      "credentialValues": {"Dummy": ""},
      "appTags": {"appType": ["nocredapp"], "environment": ["logical"], "execlevel": ["app"]}
  }

===============================================================================
TASK EXECUTION FLOW
===============================================================================
1. Receive task name and collected inputs
2. Check if any input depends on previous task output
3. For dependency inputs:
    a. Check if previous task output exists
    b. If not, execute previous task first
    c. Use real output as input value
4. Prepare execution payload with real data
5. Call task execution API
6. Parse and return execution results with output file URLs

===============================================================================
REQUEST BODY FORMAT
===============================================================================
    {
        "taskname": "TaskName",
        "application": {
            "appName": "ApplicationClassName",
            "appURL": "https://app.url.com",
            "credentialType": "CredentialTypeName",
            "credentialValues": {
                "key1": "value1",
                "key2": "value2"
            },
            "appTags": [Complete object from of 'appTags' from the task in the rule]
        },
        "taskInputs": {
            "inputs": {
                "InputName1": "value_or_file_url",
                "InputName2": "value_or_file_url"
            }
        }
    }
===============================================================================
Args:
    task_name: Name of the task to execute
    task_inputs: Dictionary containing key-value pairs of task inputs
                Format: {"input_name": "value" or file_url}
    application: Optional application configuration for tasks requiring credentials
                Format: {
                    "appName": "ApplicationClassName",
                    "appURL": "https://...",
                    "credentialType": "...",
                    "credentialValues": {...},
                    "appTags": [Complete object from of 'appTags' from the task in the rule]
                }
                OR {"applicationId": "existing-app-id", "appTags": [Complete object from of 'appTags' from the task in the rule]}

Returns:
    Dict containing:
    {
        "success": bool,
        "execution_status": "COMPLETED" | "FAILED",
        "task_name": str,
        "task_inputs": dict,
        "outputs": dict,  # Output file URLs and values
        "errors": list,
        "message": str,
        "next_action": str
    }
generate_design_notes_previewA

Generate design notes preview for user confirmation before actual creation.

DESIGN NOTES PREVIEW GENERATION

This tool generates a complete Jupyter notebook structure as a dictionary for user review. The MCP will create the full notebook content with 7 standardized sections based on rule context and metadata, then return it for user confirmation.

DESIGN NOTES TEMPLATE STRUCTURE REQUIREMENTS

The MCP should generate a Jupyter notebook (.ipynb format) with exactly 7 sections:

SECTION 1: Evidence Details

DESCRIPTION: System identification and rule purpose documentation

CONTENT REQUIREMENTS:

  • Table with columns: System | Source of data | Frameworks | Purpose

  • System: {TARGET_SYSTEM_NAME} (all lowercase)

  • Source: Always 'compliancecow'

  • Frameworks: Always '-'

  • Purpose: Use rule's purpose from metadata

  • RecommendedEvidenceName: {RULE_OUTPUT_NAME} (use rule's primary compliance output, exclude LogFile)

  • Description: Use rule description from metadata

  • Reference: Include actual API documentation links that the rule uses (extract from task specifications, no placeholder values)

FORMAT: Markdown cell with table and code blocks only

SECTION 2: Define the System Specific Data (Extended Data Schema)

DESCRIPTION: System-specific raw data structure definition with detailed breakdown

CONTENT REQUIREMENTS:

Step 2a: Inputs

  • Generate numbered list from rule's spec.inputs

  • Format: "{NUMBER}. {INPUT_NAME}({INPUT_DATA_TYPE}) - {INPUT_DESCRIPTION}"

  • Include all inputs with their types and purposes

Step 2b: API & Flow

  • Generate numbered list of API endpoints based on target system

  • Format: "{NUMBER}. {HTTP_METHOD} {URL} - {BRIEF_DESCRIPTION}"

  • Include only actual API endpoints that this specific rule uses for data collection

  • Extract from task specifications, not generic templates

Step 2c: Define the Extended Schema

  • Generate large JSON code block with actual API response structure

  • Use system-specific field names and realistic data values

  • Include all fields that will be processed by the rule

FORMAT: Markdown headers with detailed lists + large JSON code block

SECTION 3: Define the Standard Schema

DESCRIPTION: Standardized compliance data format documentation

CONTENT REQUIREMENTS:

  • Header explaining standard schema purpose

  • JSON code block with complete standardized structure containing:

  • System: based on target system (lowercase)

  • Source: Always 'compliancecow'

  • Resource info: ResourceID, ResourceName, ResourceType, ResourceLocation, ResourceTags, ResourceURL

  • System-specific data fields based on actual rule output columns, if unavailable then generate based on rule details

  • Compliance fields: ValidationStatusCode, ValidationStatusNotes, ComplianceStatus, ComplianceStatusReason

  • Evaluation and action fields: EvaluatedTime, UserAction, ActionStatus, ActionResponseURL (UserAction, ActionStatus, ActionResponseURL are empty by default)

Step 3a: Sample Data

  • Generate markdown table with ALL standard schema columns in same order - include all columns even if empty

  • Include three complete example rows with realistic, system-specific data

  • Use proper data formatting and realistic identifiers

FORMAT: JSON code block + comprehensive markdown table

SECTION 4: Describe the Compliance Taxonomy

DESCRIPTION: Status codes and compliance definitions

CONTENT REQUIREMENTS:

  • Table with columns: ValidationStatusCode | ValidationStatusNotes | ComplianceStatus | ComplianceStatusReason

  • ValidationStatusCode: CRITICAL FORMAT REQUIREMENT - Rule-specific codes must strictly follow this exact format:

    • Each word must be exactly 3-4 characters long

    • Words must be separated by underscores (_)

    • Use ALL UPPERCASE letters

    • Create codes that directly relate to the rule's compliance purpose

    • Examples: CODE_OWN_HAS_PR_REV (code ownership has pull request review), REPO_SEC_SCAN_PASS (repository security scan passed), AUTH_MFA_ENBL (authentication multi-factor enabled)

    • DO NOT use generic codes like "PASS" or "FAIL"

    • DO NOT exceed 4 characters per word

    • DO NOT use special characters other than underscores

    • Generate 4-6 different status codes covering various compliance scenarios

  • Detailed compliance reasons specific to the rule's purpose

  • Both COMPLIANT and NON_COMPLIANT scenarios

FORMAT: Markdown cell with table
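The word-length and underscore constraints above can be expressed as a small validity check. This is an illustrative sketch; note that the section's own examples include two-letter words such as "PR", so the pattern below accepts 2-4 letters per word:

```python
import re

# 2-4 uppercase letters per word, words joined by underscores.
# (The rules above say 3-4 letters, but the listed examples include
# two-letter words like "PR", so this sketch is slightly looser.)
CODE_PATTERN = re.compile(r"^[A-Z]{2,4}(?:_[A-Z]{2,4})+$")

def is_valid_status_code(code: str) -> bool:
    """Check a candidate ValidationStatusCode against the format rules."""
    return CODE_PATTERN.fullmatch(code) is not None
```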

SECTION 5: Calculation for Compliance Percentage and Status

DESCRIPTION: Percentage calculations and status logic

CONTENT REQUIREMENTS:

  • Header explaining compliance calculation methodology

  • Code cell with calculation logic:

  • TotalCount = Count of 'COMPLIANT' and 'NON_COMPLIANT' records

  • CompliantCount = Count of 'COMPLIANT' records

  • CompliancePCT = (CompliantCount / TotalCount) * 100

  • Status determination rules:

    • COMPLIANT: 100%

    • NON_COMPLIANT: 0% to less than 100%

    • NOT_DETERMINED: If no records are found

FORMAT: Markdown header cell + Code cell with calculation logic
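The calculation logic above can be sketched as a small Python function. The field names follow this section; everything else is illustrative:

```python
def compliance_summary(statuses):
    """Apply the calculation rules above to a list of per-record
    ComplianceStatus values."""
    total = sum(1 for s in statuses if s in ("COMPLIANT", "NON_COMPLIANT"))
    if total == 0:
        # No records found at all.
        return {"CompliancePCT": 0.0, "ComplianceStatus": "NOT_DETERMINED"}
    compliant = sum(1 for s in statuses if s == "COMPLIANT")
    pct = (compliant / total) * 100
    # COMPLIANT only at exactly 100%; anything below is NON_COMPLIANT.
    status = "COMPLIANT" if pct == 100 else "NON_COMPLIANT"
    return {"CompliancePCT": pct, "ComplianceStatus": status}
```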

SECTION 6: Describe (in words) the Remediation Steps for Non-Compliance

DESCRIPTION: Non-compliance remediation procedures

CONTENT REQUIREMENTS:

  • Can be "N/A" if no specific remediation steps apply

  • When applicable, provide:

  • Immediate Actions required

  • Short-term remediation steps

  • Long-term monitoring approaches

  • Responsible parties and timeframes

  • System-agnostic guidance that can be customized

FORMAT: Markdown cell with detailed remediation procedures

SECTION 7: Control Setup Details

DESCRIPTION: Rule configuration and implementation details

CONTENT REQUIREMENTS:

  • Table with two columns: Control Details | (Values)

  • Required fields (only these):

  • RuleName: Use actual rule name

  • PreRequisiteRuleNames: Default to 'N/A' or list dependencies

  • ExtendedSchemaRuleNames: Default to 'N/A' or list related rules

  • ApplicationClassName: Fetch all appType values from spec.tasks array, combine them, remove duplicates, and format as comma-separated values

  • PostSynthesizerName: Default to 'N/A' or specify if used

FORMAT: Markdown table with control configuration details

JUPYTER NOTEBOOK METADATA REQUIREMENTS

  • Include proper notebook metadata (colab, kernelspec, language_info)

  • Set nbformat: 4, nbformat_minor: 0

  • Use appropriate cell metadata with unique IDs for each section

  • Ensure proper markdown and code cell formatting
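A minimal notebook skeleton satisfying these metadata requirements might look like the following. The kernel details and the cell ID are illustrative assumptions:

```python
import json

# Minimal .ipynb structure per the metadata requirements above.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 0,
    "metadata": {
        "colab": {"provenance": []},
        "kernelspec": {"name": "python3", "display_name": "Python 3"},
        "language_info": {"name": "python"},
    },
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {"id": "evidence-details"},  # unique ID per section
            "source": ["## Evidence Details\n"],
        },
        # ...one or more cells for each of the 7 sections...
    ],
}
```

Since .ipynb files are plain JSON, `json.dumps(notebook)` produces the serialized notebook directly.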

MCP CONTENT POPULATION INSTRUCTIONS

The MCP should extract the following information from the rule context:

  • Rule name, purpose, description from rule metadata

  • System name from appType (clean by removing connector suffixes like "-connector")

  • Task details from spec.tasks array

  • Input specifications from spec.inputs and spec.inputsMeta__

  • Output specifications from spec.outputsMeta__

  • Application connector information for control setup

  • API endpoints from task specifications (not generic placeholders)

CONTENT GENERATION GUIDELINES

  • Use realistic, system-specific examples that can be customized later

  • Include comments in code sections indicating customization points

  • Provide system-agnostic content that applies broadly

  • Use consistent naming conventions throughout all sections

  • Extract actual API documentation links from task specifications

  • Generate ValidationStatusCodes that are specific to the rule's compliance purpose

  • Ensure all sample data reflects the actual system being monitored

WORKFLOW

  1. MCP retrieves rule context from stored rule information

  2. MCP generates complete Jupyter notebook using template structure above

  3. MCP populates template with extracted rule metadata and calculated values

  4. MCP returns complete notebook structure as dictionary for user review

  5. User reviews and confirms the structure

  6. If approved, call create_design_notes() to actually save the notebook

ARGS

  • rule_name: Name of the rule for which to generate design notes preview

RETURNS

Dict containing complete notebook structure for user review and confirmation

create_design_notesA

Create and save design notes after user confirmation.

DESIGN NOTES CREATION:

This tool actually creates and saves the design notes after the user has reviewed and confirmed the preview structure from generate_design_notes_preview().

WORKFLOW:

  1. Before creating new design notes, call fetch_rule_design_notes() to check whether design notes already exist; if they do, continue with that flow, otherwise continue with this one

  2. User has already reviewed notebook structure from preview

  3. User confirmed the structure is acceptable

  4. This tool receives the complete design notes dictionary structure

  5. MCP saves the notebook and returns access details

Args:
  rule_name: Name of the rule for which to create design notes
  design_notes_structure: Complete Jupyter notebook structure as dictionary

Returns: Dict containing design notes creation status and access details

fetch_rule_design_notesA

Fetch and manage design notes for a rule.

WORKFLOW:

  1. CHECK EXISTING NOTES:

  • Always check if design notes exist for the rule first (whether user wants to create or view)

  • If found: Present complete notebook to user in readable format

  • If not found: Offer to create new ones

  2. IF NOTES EXIST:

  • Show complete notebook with all sections (this serves as the VIEW)

  • Ask: "Here are your design notes. Modify or regenerate?"

  3. USER OPTIONS:

  • MODIFY:

  1. Ask "Do you need any changes to the design notes?"

  2. If no changes needed: Get user confirmation, then call create_design_notes() to update

  3. If changes needed: Collect modifications, show preview, get confirmation, then call create_design_notes() to update

  • REGENERATE:

  1. Generate the design notes using generate_design_notes_preview()

  2. Show preview to user

  3. Get user confirmation

  4. If confirmed: Call create_design_notes() to save the regenerated design notes

  • CANCEL: End workflow

  4. IF NO NOTES EXIST:

  • Inform user no design notes found

  • Ask: "Create comprehensive design notes for this rule?"

  • If yes: Generate the design notes using generate_design_notes_preview()

  • Show preview to user

  • Get user confirmation

  • If confirmed: Call create_design_notes() to generate

KEY RULES:

  • MUST follow this workflow explicitly step by step

  • Always check for existing notes first whenever user asks about design notes (create or view)

  • ALWAYS get user confirmation before calling create_design_notes()

  • If any updates needed, explicitly call create_design_notes() tool to save changes

  • Present notes in Python notebook format

  • Use create_design_notes() for creation and updates

Args: rule_name: Name of the rule

Returns: Dict with success status, rule name, design notes content, and error details

generate_rule_readme_previewA

Generate README.md preview for rule documentation before actual creation.

RULE README GENERATION:

This tool generates a complete README.md structure as a string for user review. The MCP will create comprehensive rule documentation with detailed sections based on rule context and metadata, then return it for user confirmation.

README TEMPLATE STRUCTURE REQUIREMENTS:

The MCP should generate a README.md with exactly these sections:

SECTION 1: Rule Header

DESCRIPTION: Rule identification and overview

CONTENT REQUIREMENTS:

  • Rule name as main title (# {RULE_NAME})

  • Brief description from rule metadata

  • Status badges (Version, Application Type, Environment)

  • Purpose statement

  • Last updated timestamp

FORMAT: Markdown header with badges and overview

SECTION 2: Overview

DESCRIPTION: High-level rule explanation

CONTENT REQUIREMENTS:

  • What this rule does (purpose and description)

  • Target system/application

  • Compliance framework alignment

  • Key benefits and use cases

  • When to use this rule

FORMAT: Markdown sections with bullet points

SECTION 3: Rule Architecture

DESCRIPTION: Technical architecture and flow

CONTENT REQUIREMENTS:

  • Rule flow diagram (text-based)

  • Task sequence and dependencies

  • Data flow: Input → Processing → Output

  • Integration points

  • Architecture decisions

FORMAT: Markdown with code blocks for diagrams

SECTION 4: Inputs

DESCRIPTION: Detailed input specifications

CONTENT REQUIREMENTS:

  • Table of all rule inputs with:

    • Input Name

    • Data Type

    • Required/Optional

    • Description

    • Default Value

    • Example Value

  • Input validation rules

  • File format specifications (for FILE inputs)

FORMAT: Markdown table with detailed explanations

SECTION 5: Tasks

DESCRIPTION: Individual task breakdown

CONTENT REQUIREMENTS:

  • For each task in the rule:

    • Task name and alias

    • Purpose and functionality

    • Input requirements

    • Output specifications

    • Processing logic overview

    • Error handling

  • Task execution order

  • Dependencies between tasks

FORMAT: Markdown subsections for each task

SECTION 6: Outputs

DESCRIPTION: Rule output specifications

CONTENT REQUIREMENTS:

  • Table of all rule outputs with:

    • Output Name

    • Data Type

    • Description

    • Format/Structure

    • Example Value

  • Output file formats and schemas

  • Success/failure indicators

FORMAT: Markdown table with examples

SECTION 7: Configuration

DESCRIPTION: Rule configuration and setup

CONTENT REQUIREMENTS:

  • Application type and environment settings

  • Execution level and mode

  • Required permissions and access

  • System prerequisites

  • Configuration examples

  • Environment-specific settings

FORMAT: Markdown with code blocks

SECTION 8: Usage Examples

DESCRIPTION: Practical usage scenarios

CONTENT REQUIREMENTS:

  • Basic usage example

  • Advanced configuration example

  • Common use cases

  • Best practices

  • Troubleshooting tips

FORMAT: Markdown with code examples

SECTION 9: I/O Mapping

DESCRIPTION: Data flow mapping details

CONTENT REQUIREMENTS:

  • Complete I/O mapping visualization

  • Rule input to task input mappings

  • Task output to task input mappings

  • Task output to rule output mappings

  • Data transformation explanations

FORMAT: Markdown with formatted mapping table

SECTION 10: Troubleshooting

DESCRIPTION: Common issues and solutions

CONTENT REQUIREMENTS:

  • Common error scenarios

  • Input validation failures

  • Task execution errors

  • Output generation issues

  • Performance considerations

  • Support and contact information

FORMAT: Markdown FAQ-style sections

SECTION 11: Version History

DESCRIPTION: Change log and versioning

CONTENT REQUIREMENTS:

  • Current version information

  • Version history table

  • Change descriptions

  • Migration notes

  • Deprecation warnings

FORMAT: Markdown table with version details

SECTION 12: References

DESCRIPTION: Additional resources and links

CONTENT REQUIREMENTS:

  • Related documentation links

  • Compliance framework references

  • API documentation

  • Support resources

  • Contributing guidelines

FORMAT: Markdown bullet list with links

MARKDOWN FORMATTING REQUIREMENTS:

  • Use proper Markdown syntax

  • Include table of contents with links

  • Use code blocks for examples

  • Include badges and shields

  • Proper heading hierarchy (H1, H2, H3)

  • Use tables for structured data

  • Include horizontal rules for section separation

MCP CONTENT POPULATION INSTRUCTIONS:

The MCP should extract the following information from the rule context:

  • Rule name, purpose, description from rule metadata

  • System name from appType (clean by removing connector suffixes)

  • Task details from spec.tasks array (name, alias, purpose, appTags)

  • Input specifications from spec.inputs object

  • Output specifications from spec.outputsMeta__

  • I/O mappings from spec.ioMap array

  • Environment and execution settings from labels

  • Application type and integration details

PLACEHOLDER REPLACEMENT RULES:

  • {RULE_NAME} = meta.name

  • {RULE_PURPOSE} = meta.purpose

  • {RULE_DESCRIPTION} = meta.description

  • {SYSTEM_NAME} = extracted from appType

  • {VERSION} = meta.version or "1.0.0"

  • {ENVIRONMENT} = meta.labels.environment[0]

  • {APP_TYPE} = meta.labels.appType[0]

  • {EXEC_LEVEL} = meta.labels.execlevel[0]

  • {TASK_COUNT} = len(spec.tasks)

  • {INPUT_COUNT} = len(spec.inputs)

  • {OUTPUT_COUNT} = len(spec.outputsMeta__)

  • {TIMESTAMP} = current ISO timestamp
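The replacement rules above amount to a simple substitution pass over the template. The rule dict below is a hypothetical stand-in for the real rule context:

```python
# Hypothetical rule context standing in for the real metadata.
rule = {
    "meta": {
        "name": "ExampleRule",
        "labels": {"environment": ["dev"], "appType": ["example-connector"]},
    },
    "spec": {"tasks": [{}, {}], "inputs": {"A": 1}, "outputsMeta__": [{}]},
}

# Build the placeholder map per the rules above (subset shown).
replacements = {
    "{RULE_NAME}": rule["meta"]["name"],
    "{ENVIRONMENT}": rule["meta"]["labels"]["environment"][0],
    "{APP_TYPE}": rule["meta"]["labels"]["appType"][0],
    "{TASK_COUNT}": str(len(rule["spec"]["tasks"])),
    "{INPUT_COUNT}": str(len(rule["spec"]["inputs"])),
    "{OUTPUT_COUNT}": str(len(rule["spec"]["outputsMeta__"])),
}

template = "# {RULE_NAME}\nTasks: {TASK_COUNT} | Inputs: {INPUT_COUNT}"
readme = template
for placeholder, value in replacements.items():
    readme = readme.replace(placeholder, value)
```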

CONTENT GUIDELINES:

  • Use clear, technical language

  • Include practical examples

  • Provide comprehensive coverage

  • Make it developer-friendly

  • Include troubleshooting help

  • Keep sections well-organized

  • Use consistent formatting

WORKFLOW:

  1. MCP retrieves rule context using fetch_rule() (ensure only the fetch_rule tool is called, not fetch_cc_rule)

  2. MCP extracts metadata and technical details

  3. MCP generates complete README.md content using template above

  4. MCP populates all placeholders with actual rule data

  5. MCP returns complete README content as string for user review

  6. User reviews and confirms the content

  7. If approved, call create_rule_readme() to actually save the README

Args: rule_name: Name of the rule for which to generate README preview

Returns: Dict containing complete README.md content as string for user review

create_rule_readmeA

Create and save README.md file after user confirmation.

README CREATION:

This tool actually creates and saves the README.md file after the user has reviewed and confirmed the preview content from generate_rule_readme_preview().

WORKFLOW:

  1. User has already reviewed README content from preview

  2. User confirmed the content is acceptable

  3. This tool receives the complete README.md content as string

  4. MCP saves the README file and returns access details

Args:
  rule_name: Name of the rule for which to create README
  readme_content: Complete README.md content as string

Returns: Dict containing README creation status and access details

update_rule_readmeB

Update existing README.md file with new content.

README UPDATE:

This tool updates an existing README.md file with new content. Useful for making changes after initial creation or updating documentation as rules evolve.

Args:
  rule_name: Name of the rule for which to update README
  updated_readme_content: Updated README.md content as string

Returns: Dict containing README update status and details

get_application_infoA

Get detailed information about an application, including supported credential types.

APPLICATION CREDENTIAL CONFIGURATION WORKFLOW:

  1. User selects "Configure new application credentials".

  2. Call this tool to retrieve application details and supported credential types.

  3. Present credential options to the user with:

    • Required attributes

    • Data type

    • If type is bytes → must be Base64-encoded

  4. Collect credential values for the selected type.

  5. Validate that all required attributes are provided.

  6. Verify that each credential value matches its expected data type.

  7. Build the credential configuration and append it to the apps_config array.

DATA VALIDATION REQUIREMENTS:

  • All required attributes must be present.

  • Data type must match specification.

  • Bytes values must be Base64-encoded before saving.

Args: tag_name: The app tag name for retrieving application information

Returns: Dict containing application details and supported credential types

fetch_execution_progressA

Fetch execution progress for a running rule.

IMPORTANT FOR CLAUDE/CLIENT:

This tool returns a snapshot of current progress. To see real-time updates:

  1. Call this tool repeatedly, once per second

  2. Check the "continue_polling" flag in response

  3. If continue_polling=true, call again after 1 second

  4. If continue_polling=false, execution is complete
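The polling loop above can be sketched as follows. Here `fetch_progress` stands in for the actual fetch_execution_progress tool call and is an assumption of this sketch:

```python
import time

def poll_until_complete(fetch_progress, rule_name, execution_id, interval=1.0):
    """Yield progress snapshots until continue_polling is False.

    `fetch_progress` is a stand-in for calling the
    fetch_execution_progress tool; the caller re-renders the
    display (replacing, not appending) after each snapshot.
    """
    while True:
        snapshot = fetch_progress(rule_name, execution_id)
        yield snapshot
        if not snapshot.get("continue_polling", False):
            return
        time.sleep(interval)
```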

DISPLAY INSTRUCTIONS FOR CLAUDE:

When displaying progress, REPLACE the previous output (don't append):

🔄 Execution Progress (Live) ─────────────────────────────────

Show each task on ONE line that UPDATES in place:

• task_name (type) [progress_bar] XX% STATUS

Use these Unicode blocks for progress bars:

  • COMPLETED: 🟦 (blue blocks)

  • INPROGRESS: 🟩 (green blocks)

  • ERROR: 🟥 (red blocks)

  • PENDING: ⬜ (white blocks)

After each poll, REPLACE the entire progress display with new data. DO NOT show multiple versions of the same task.

EXAMPLE DISPLAY SEQUENCE:

Poll 1:
• fetch_users (HTTP) ⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜ 0% PENDING
• process_data (Script) ⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜ 0% PENDING

Poll 2 (REPLACES above):
• fetch_users (HTTP) 🟩🟩🟩🟩⬜⬜⬜⬜⬜⬜ 40% INPROGRESS
• process_data (Script) ⬜⬜⬜⬜⬜⬜⬜⬜⬜⬜ 0% PENDING

Poll 3 (REPLACES above):
• fetch_users (HTTP) 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦 100% COMPLETED
• process_data (Script) 🟩🟩🟩⬜⬜⬜⬜⬜⬜⬜ 30% INPROGRESS
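A line like the ones in the example sequence can be rendered from a progress snapshot roughly as follows; the field names and ten-block bar width are illustrative:

```python
# Block characters per status, as listed above.
BLOCKS = {"COMPLETED": "🟦", "INPROGRESS": "🟩", "ERROR": "🟥", "PENDING": "⬜"}

def render_task_line(name, task_type, pct, status, width=10):
    """Render one task as a single line that is replaced in place."""
    filled = round(pct / 100 * width)
    bar = BLOCKS.get(status, "⬜") * filled + "⬜" * (width - filled)
    return f"• {name} ({task_type}) {bar} {pct}% {status}"
```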

RESPONSE FLAGS:

  • continue_polling: true = keep polling every second

  • continue_polling: false = execution complete, show final summary

  • display_mode: "replace" = replace previous display

UI DISPLAY REQUIREMENT:

  • The file URL must ALWAYS be displayed to the user in the UI, allowing the user to view or download the file directly.

Args:
  rule_name: Rule being executed
  execution_id: ID from execute_rule()

Returns: Dict with progress data and polling instructions

fetch_output_fileA

Fetch and display content of an output file from rule execution.

FILE OUTPUT HANDLING:

WHEN TO USE:

  • Rule execution output contains file URLs

  • User requests to view specific file content

  • Files contain reports, logs, compliance data, or analysis results

CONTENT DISPLAY LOGIC:

  • If file size < 10KB: Show entire file content

  • If file size >= 10KB: Show only first 3 records/lines with user-friendly message

  • Supported formats: JSON, CSV, Parquet, and other text files

  • Always return file format extracted from filename

  • Provide clear user messaging about content truncation

  • CRITICAL: Always include the truncation or completion message alongside the display_content, whether the content is truncated or shown in full

  • The file URL (file_url) must ALWAYS be displayed to the user in the UI, allowing the user to view or download the file directly.
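The display logic above might be sketched like this; the returned field names mirror the ones used in this section, but the exact structure is an assumption:

```python
TRUNCATE_THRESHOLD = 10 * 1024  # 10KB cutoff from the logic above

def build_display(file_url: str, content: str, max_records: int = 3):
    """Small files are shown whole; larger ones show only the first
    few records with a user-friendly message."""
    filename = file_url.rsplit("/", 1)[-1]
    file_format = filename.rsplit(".", 1)[-1] if "." in filename else "unknown"
    if len(content.encode("utf-8")) < TRUNCATE_THRESHOLD:
        display, message = content, "Full content shown."
    else:
        display = "\n".join(content.splitlines()[:max_records])
        message = f"File is large; showing first {max_records} records."
    return {
        "file_url": file_url,   # always surfaced so the user can download
        "file_format": file_format,
        "display_content": display,
        "user_message": message,
    }
```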

MANDATORY CONTENT DISPLAY FORMAT:

  • FileName: [extracted from file_url]

  • Format: [file format from file_format]

  • Message: [truncation status or completion message from user_message, if applicable]

  • Content: [the entire display_content, rendered according to the file format]

  • File URL: [always show the file_url in the UI so the user can view or download the file]

Args: file_url: URL of the file to fetch and display

Returns: Dict containing file content, metadata, and display information

fetch_applicationsB

Fetch all available applications from the system.

Returns: Dict containing list of applications with their details

prepare_applications_for_executionA

Analyze rule tasks and prepare application configuration requirements for execution.

This tool helps users understand what applications are needed and whether they can share applications across multiple tasks.

WHEN TO USE:

  • Before calling execute_rule() to understand application requirements

  • To identify if multiple tasks can share the same application

  • To determine if unique identifiers are needed when using different applications for same appType

NOTE: This tool is optional. Rules with only 'nocredapp' tasks can be executed directly without any application configuration. Use this tool only when tasks require credentials.

APPLICATION SHARING SCENARIOS (when applications are needed):

  1. Shared Application: User wants same credentials for all tasks of an appType

    • Single application config with basic appTags (just appType)

    • One application covers multiple tasks

  2. Separate Applications: User needs different credentials per task

    • Must add unique identifier key (e.g., "purpose") to task appTags

    • Each application config must include a matching unique identifier

  3. No Application Needed: All tasks have 'nocredapp' appType

    • Skip application configuration entirely

    • Call execute_rule() with an empty applications list
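The three scenarios can be illustrated with hypothetical application configs; the connector name and the appTags shape are assumptions, not taken from a real rule:

```python
# 1. Shared application: basic appTags (just the appType), so one
#    config covers every task of that appType.
shared_apps = [{
    "appName": "GitHubConnector",
    "appTags": [{"appType": ["github-connector"]}],
}]

# 2. Separate applications: each config carries a unique identifier
#    ("purpose") matching the identifier added to its task's appTags.
separate_apps = [
    {"appName": "GitHubConnector",
     "appTags": [{"appType": ["github-connector"], "purpose": ["source"]}]},
    {"appName": "GitHubConnector",
     "appTags": [{"appType": ["github-connector"], "purpose": ["target"]}]},
]

# 3. All tasks are 'nocredapp': pass an empty applications list.
nocred_apps = []
```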

WORKFLOW:

  1. Call this tool with rule_name

  2. Review which tasks need applications (if any)

  3. If no tasks need applications (all nocredapp): Skip to step 6

  4. For tasks with same appType, decide: share or separate?

  5. If sharing: Provide one application config per appType If separate: Add unique identifiers and provide separate configs

  6. Call execute_rule() with configured applications (or empty list for nocredapp rules)

Args: rule_name: Name of the rule to analyze

Returns: Dict with analysis results and configuration guidance

add_unique_identifier_to_taskA

Add a unique identifier key-value pair to a specific task's appTags.

Use this when multiple tasks share the same appType but need DIFFERENT applications. The unique identifier allows the system to match each application to its specific task.

WHEN TO USE:

  • After prepare_applications_for_execution() identifies tasks needing differentiation

  • When user chooses "separate applications" option for tasks with same appType

  • Before configuring separate applications for same appType tasks

NOT NEEDED WHEN:

  • User wants to SHARE the same application across multiple tasks

  • Task already has a unique appType (no other tasks share it)

WORKFLOW:

  1. Call prepare_applications_for_execution()

  2. If user chooses separate applications for an appType:

    • Call this tool for each task to add unique identifier

    • Use same key but different values (e.g., "purpose": "source" vs "purpose": "target")

  3. Configure applications with matching identifiers

Args:
  rule_name: Name of the rule containing the task
  task_alias: Alias of the task to update
  identifier_key: Unique identifier key (e.g., "purpose", "sourceSystem")
  identifier_value: Value for the identifier (e.g., "source-repo", "production-db")

Returns: Dict with update status and guidance

check_rule_statusA

Quick status check showing what's been collected and what's missing. Perfect for resuming in new chat windows.

ENHANCED WITH AUTO-INFERENCE STATUS ANALYSIS:

  • Ignores stored status/phase fields and analyzes actual rule structure

  • Auto-detects completion status based on rule content (same logic as create_rule)

  • Calculates real-time progress percentage from actual components

  • Determines next actions based on what's actually missing

  • Provides accurate resumption guidance regardless of stored metadata

  • Perfect for cross-chat resumption with reliable state detection

AUTO-INFERENCE LOGIC:

  • Analyzes spec.tasks, spec.inputs, spec.inputsMeta__, spec.ioMap, spec.outputsMeta__

  • Calculates completion based on actual content, not stored fields

  • Determines status: DRAFT → READY_FOR_CREATION → ACTIVE

  • Provides accurate progress: 5% → 25% → 85% → 100%

  • Identifies exactly what components are missing
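The auto-inference idea can be sketched as a function that derives progress purely from which spec components exist. The checkpoint grouping and milestone mapping below are illustrative, not the MCP's exact algorithm:

```python
def infer_progress(spec: dict) -> int:
    """Derive progress from the rule's actual structure, ignoring any
    stored status/phase fields."""
    checkpoints = [
        bool(spec.get("tasks")),                                      # tasks defined
        bool(spec.get("inputs")) and bool(spec.get("inputsMeta__")),  # inputs wired
        bool(spec.get("ioMap")) and bool(spec.get("outputsMeta__")),  # I/O mapped
    ]
    # Map completed checkpoints onto the 5% → 25% → 85% → 100% scale.
    return [5, 25, 85, 100][sum(checkpoints)]
```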

Args: rule_name: Name of the rule to check status for

Returns: Dict with auto-inferred status information and accurate next action recommendations

fetch_rules_suggestionsA

Tool-based version of fetch_rules_and_tasks_suggestions for improved compatibility and prevention of duplicate rule creation.

This tool serves as the initial step in the rule creation process. It helps determine whether the user's proposed use case matches any existing rule in the catalog.

PURPOSE:

  • To analyze the user's use case and avoid duplicate rule creation by identifying the most suitable existing rule based on its name, description, and purpose.

  • NEW: Check for partially developed rules in local system before allowing new rule creation

  • NEW: Present resumption options if incomplete rules are found to prevent duplicate work

WHEN TO USE:

  • As the first step before initiating a new rule creation process.

  • When the user wants to check if similar rules already exist by leveraging the Rules Suggestions API, instead of browsing the entire catalog manually.

  • When verifying if a suggested rule can be reused or adapted rather than creating one from scratch.

  • When checking for incomplete local rules that should be resumed instead of creating new ones.

🚫 DO NOT USE THIS TOOL FOR:

  • Checking what rules are available in the ComplianceCow system.

  • This tool only works with the rule catalog (not the entire ComplianceCow system).

  • The catalog contains only rules that are published and available for reuse in the catalog.

  • For direct ComplianceCow system lookups, use dedicated system tools instead:

  • fetch_cc_rule_by_name

  • fetch_cc_rule_by_id

MANDATORY STEP: CONTEXT SUMMARY

  • Before calling the rule catalog API, always rewrite the user’s raw requirement into a single-paragraph descriptive summary string (not bullet points, not verbatim input).

  • The summary must capture the essence of the requirement in clear, natural language.

  • This summary string is what will be passed to fetch_rules_and_tasks_suggestions.

  • Example: User input: "Use GitHub GraphQL API to fetch merged PRs and check if approvals >= 2" Summary: "The proposed rule validates compliance for GitHub Pull Requests by retrieving all merged PRs through the GitHub GraphQL API, checking whether the number of approvers meets a required threshold, and marking them as compliant or non-compliant."

WHAT IT DOES:

  • Generates a concise summary string from the user's intent or requirements.

  • Calls the Rules Suggestions API with this summary string to retrieve a narrowed list of relevant rules.

  • Performs intelligent matching using metadata (name, description, purpose) from the suggested rules against the user-provided use case details.

  • Uses semantic pattern recognition to identify similar or related rules, even across different systems (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions).

  • Analyzes the readmeData field from the fetch_rule() response to validate the rule's suitability for the user's use case.

IF A MATCHING RULE IS FOUND:

  • Retrieves complete details via fetch_rule().

  • If the readmeData field is available in the fetch_rule() response, performs README-based validation to assess the rule's suitability for the user's use case.

  • If suitable:

  • Returns the rule with full metadata, explanation, and the analysis report.

  • If not suitable:

  • Informs the user that the rule's README content does not align with the intended use case.

  • Prompts the user with clear next-step options:

    • "The rule's README content does not align with your use case. Please choose one of the following options:"

    • Customize the existing rule

    • Evaluate alternative matching rules

    • Proceed with new rule creation

  • Waits for the user's choice before proceeding.

IF A SIMILAR RULE EXISTS FOR AN ALTERNATE TECHNOLOGY STACK:

  • Detects rules with the same logic but built for a different platform or system (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions)

  • If the readmeData field is available in the fetch_rule() response, retrieves and analyzes it to compare the implementation details against the user's proposed use case

  • Based on the comparison:

    • If the README content matches or is mostly reusable, suggest using the existing rule structure and logic as a foundation to create a new rule tailored to the user's target system

    • If the README content does not match or is not suitable, clearly inform the user and recommend either modifying the logic significantly or proceeding with a completely new rule from scratch

IF NO SUITABLE RULE IS FOUND:

  • Clearly informs the user that no relevant rule matches the proposed use case

  • Suggests continuing with new rule creation

  • Optionally highlights similar rules that can be used as a reference

MANDATORY STEPS: README VALIDATION:

  • Always retrieve and analyze readmeData from fetch_rule().

  • Ensure the rule's logic, behavior, and intended use align with the user's proposed use case.

README ANALYSIS REPORT:

  • Generate a clear and concise report for each readmeData analysis that classifies the result as a full match, partially reusable, or not aligned.

  • Present this report to the user for review.

USER CONFIRMATION BEFORE PROCEEDING: When analyzing a README file:

  • If no relevant rule matches the proposed use case, or if the README is deemed unsuitable, the tool must pause and request explicit user confirmation before proceeding further.

  • The tool should:

  • Clearly inform the user that no matching rule was found or the README is not appropriate.

  • Suggest creating a new rule as the next step.

  • Optionally recommend similar existing rules that can serve as references to help the user craft the new rule.

ITERATE UNTIL MATCH:

  • Repeat the above steps until a suitable rule is found or all options are exhausted.

CROSS-PLATFORM RULE HANDLING:

  • For rules from a different stack:

  • If reusable: suggest customization

  • If not reusable: recommend new rule creation

Returns:

  • A single rule object with full metadata and verified README match β€” if an exact match is found

  • A similar rule suggestion with customization options β€” if a cross-system match is found (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions)

  • A message indicating no suitable rule found β€” with next steps and guidance to create a new rule

get_task_detailsB

Tool-based version of get_task_details for improved compatibility.

DETAILED TASK ANALYSIS REQUIREMENTS:

  • Use this tool if the tasks://details/{task_name} resource is not accessible

  • Extract complete input/output specifications with template information

  • Review detailed capabilities and requirements from the full README

  • Identify template-based inputs (those with the templateFile property)

  • Analyze appTags to determine the application type

  • Review all metadata and configuration options

  • Use this information for accurate task matching and rule structure creation
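The task-detail analysis above can be sketched as a small post-processing step. This is a minimal sketch assuming a hypothetical response shape for the task details (an `inputs` list whose template-based entries carry a `templateFile` property, and an `appTags` object); the exact field layout is an assumption, not a documented contract.

```python
def analyze_task_details(task: dict) -> dict:
    """Split inputs into template-based and plain, and read the app types."""
    inputs = task.get("inputs", [])
    # Template-based inputs are those carrying the templateFile property.
    template_inputs = [i["name"] for i in inputs if "templateFile" in i]
    plain_inputs = [i["name"] for i in inputs if "templateFile" not in i]
    # appTags determine the application type for matching.
    app_types = task.get("appTags", {}).get("appType", [])
    return {
        "templateInputs": template_inputs,
        "parameterInputs": plain_inputs,
        "appTypes": app_types,
    }
```

Template inputs found this way would be collected via collect_template_input(), plain ones via collect_parameter_input().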

INTENTION-BASED OUTPUT CHAINING:

  • ANALYZE output purpose: Is this meant for direct user consumption or further processing?

  • ASSESS completion level: Does this output fulfill the user's end goal or serve as a stepping stone?

  • EVALUATE consolidation needs: Are multiple outputs meant to be combined for complete picture?

  • DETERMINE transformation requirements: Does raw output need formatting for usability?

WORKFLOW GAP DETECTION:

  • IDENTIFY outputs that represent partial solutions to user problems

  • DETECT outputs that split information requiring reunification

  • RECOGNIZE outputs that extract data without presenting insights

  • FLAG outputs that validate without providing actionable summaries

COMPLETION INTENTION MATCHING:

  • SUGGEST tasks that transform intermediate outputs into final deliverables

  • RECOMMEND tasks that consolidate split information into unified reports

  • PROPOSE tasks that add analysis layer to raw validation results

  • ENSURE suggested tasks align with user's stated end goals

IMPORTANT (MANDATORY BEHAVIOR): If the requested task is not found with the user's specification, the system MUST:

  1. Prompt the user to choose how to proceed, including the option "Create task development ticket."

  2. Wait for the user's response before taking any further action.

  3. If the user chooses to create a task development ticket, call create_support_ticket() via the MCP tool, collecting the required input details from the user before submitting.

Args: task_name: The name of the task for which to retrieve details

Returns: A dictionary containing the complete task information if found, OR executes the user-selected alternative approach, OR creates a support ticket (with collected details) if chosen

create_ruleA

Create a rule with the provided structure.

COMPLETE RULE CREATION PROCESS WITH PROGRESSIVE SAVING:

This tool now handles both initial rule creation and progressive updates during the rule creation workflow. It intelligently detects the completion status and sets appropriate metadata automatically. Once the rule is created, it returns a URL to view the rule in the UI; display this URL in the chat.

ENHANCED FOR PROGRESSIVE SAVING:

  • Automatically detects rule completion status based on rule structure content

  • Determines if rule is in-progress, ready for execution, or needs more inputs

  • Handles both initial creation and updates of existing rules

  • No additional parameters needed - analyzes rule structure intelligently

  • Maintains all existing validation and creation logic

  • Preserves all original docstring instructions and requirements

CRITICAL REQUIREMENT - INPUTS META:

  • spec.inputsMeta__ is mandatory for all rules, and rule creation cannot proceed without it.

AUTOMATIC STATUS DETECTION:

  • DRAFT: Rule has tasks but missing inputs or I/O mapping (5-85% complete)

  • READY_FOR_CREATION: All inputs collected but I/O mapping incomplete (85% complete)

  • ACTIVE: Complete rule with tasks, inputs, and I/O mapping (100% complete)

RULE COMPLETION ANALYSIS:

  • Checks if tasks are defined in spec.tasks

  • Validates that spec.inputsMeta__ exists

  • Counts collected inputs in spec.inputs vs spec.inputsMeta__

  • Validates I/O mapping presence and completeness in spec.ioMap

  • Analyzes outputsMeta__ for mandatory compliance outputs

  • Sets appropriate status and creation phase automatically
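The completion analysis above can be sketched as a status-detection function. This is a minimal sketch under stated assumptions: the rule dict layout (spec.tasks, spec.inputs, spec.inputsMeta__, spec.ioMap) follows the rule structure shown in this docstring, and the phase names mirror the documented progressive creation phases; the actual detection logic in the tool may weigh more signals.

```python
def detect_rule_status(rule: dict) -> tuple:
    """Return (status, phase) based on which parts of the rule exist."""
    spec = rule.get("spec", {})
    tasks = spec.get("tasks", [])
    meta = spec.get("inputsMeta__", [])
    inputs = spec.get("inputs", {})
    io_map = spec.get("ioMap", [])
    if not tasks:
        # Only basic rule info provided so far.
        return ("DRAFT", "initialized")
    if meta and any(m["name"] not in inputs for m in meta):
        # Some declared inputs have not been collected yet.
        return ("DRAFT", "collecting_inputs")
    if not io_map:
        # All inputs gathered, but I/O mapping is still missing.
        return ("READY_FOR_CREATION", "inputs_collected")
    return ("ACTIVE", "completed")
```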

PROGRESSIVE CREATION PHASES (Auto-detected):

  1. "initialized" - Basic rule info provided (5%)

  2. "tasks_selected" - Tasks chosen and defined (25%)

  3. "collecting_inputs" - Individual inputs being collected (25-85%)

  4. "inputs_collected" - All inputs gathered, ready for I/O mapping (85%)

  5. "completed" - Final rule creation complete with I/O mapping (100%)

ORIGINAL REQUIREMENTS MAINTAINED:

  • All existing validation rules still apply

  • Task alias validation in I/O mappings preserved

  • Primary app type determination logic maintained

  • Mandatory output requirements (CompliancePCT_, ComplianceStatus_, LogFile)

  • YAML preview and user confirmation workflow preserved

  • All existing error handling and validation checks

CRITICAL: This tool should be called:

  1. After planning phase to create initial rule structure

  2. After each input collection to update rule progressively

  3. After input verification to finalize rule with I/O mapping

  4. Rule status and progress automatically detected each time

PRE-CREATION REQUIREMENTS (Original):

  1. spec.inputsMeta__ must be defined and contain valid input definitions

  2. All inputs must be collected through systematic workflow

  3. User must provide input overview confirmation

  4. All template inputs processed via collect_template_input()

  5. All parameter values collected and verified

  6. User must confirm all input values before rule creation

  7. Primary application type must be determined

  8. Rule structure must be shown to user in YAML format for final approval

STEP 1 - PRIMARY APPLICATION TYPE DETERMINATION (Preserved): Before creating rule structure, determine primary application type:

  1. Collect all unique appType tags from selected tasks

  2. Filter out 'nocredapp' (dummy placeholder value)

  3. Handle app type selection:

    • If only one valid appType: Use automatically

    • If multiple valid appTypes: Ask user to choose primary application

    • If no valid appTypes (all were nocredapp): Use 'generic' as default

  4. Set primary app type for appType, annotateType, and app fields (single value arrays)
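The four steps above can be sketched as follows. This is a sketch, not the tool's implementation: the task dict shape (appTags.appType as an array) follows the rule structure in this docstring, and `choose` is a hypothetical stand-in for the interactive "ask user to choose" step.

```python
def determine_primary_app_type(tasks, choose=None):
    """Pick the primary appType per steps 1-3 above."""
    # Step 1: collect unique appType tags; Step 2: drop the 'nocredapp' placeholder.
    app_types = sorted({
        t
        for task in tasks
        for t in task.get("appTags", {}).get("appType", [])
        if t != "nocredapp"
    })
    if not app_types:
        # No valid appTypes (all were nocredapp): fall back to 'generic'.
        return "generic"
    if len(app_types) == 1:
        # Only one valid appType: use it automatically.
        return app_types[0]
    # Multiple valid appTypes: defer to the user.
    return choose(app_types)
```

Step 4 then places the returned value into the appType, annotateType, and app fields as single-value arrays.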

STEP 2 - RULE STRUCTURE WITH TASK ALIASES (Preserved):

    apiVersion: rule.policycow.live/v1alpha1
    kind: rule
    meta:
        name: MeaningfulRuleName # Simple name, without special characters or whitespace
        purpose: Clear statement based on user breakdown
        description: Detailed description combining all steps
        labels:
            appType: [PRIMARY_APP_TYPE_FROM_STEP_1] # Single-value array. CRITICAL: Must be extracted from spec.tasks[].appTags.appType - NEVER use random values or user requirements
            environment: [logical] # Array
            execlevel: [app] # Array
        annotations:
            annotateType: [PRIMARY_APP_TYPE_FROM_STEP_1] # Same as appType - MUST match a task's appType
    spec:
        inputs:
            InputName: [ACTUAL_USER_VALUE_OR_FILE_URL] # Use original or unique names based on conflicts; omit duplicates
        inputsMeta__:
        - name: InputName                  # unique name for the input
          description:                     # purpose of the input
          dataType: FILE|HTTP_CONFIG|STRING|INT|FLOAT|BOOLEAN|DATE|DATETIME
          repeated:                        # true = multiple values allowed, false = single value
          allowedValues:                   # if repeated=true: comma-separated input is split into an array
          required:                        # value must be taken from task details
          defaultValue: [ACTUAL_USER_VALUE] # collected from the user; if dataType is FILE or HTTP_CONFIG, the value should be a filepath URL
          format: [ACTUAL_FILE_FORMAT]     # only include for FILE types (json, yaml, toml, xml, etc.)
          showField: true                  # true = most important field, false = optional/less important
        outputsMeta__:
        - name: FinalOutput
          dataType: FILE|STRING|INT|FLOAT|BOOLEAN|DATE|DATETIME
          required: true
          defaultValue: [ACTUAL_RULE_OUTPUT_VALUE]
        tasks:
        - name: Step1TaskName              # Original task name
          alias: step1                     # Meaningful task alias (simple descriptor)
          type: task
          appTags:
              appType: [COPY_FROM_TASK_DEFINITION] # Keep original task appType
              environment: [logical] # Array
              execlevel: [app] # Array
          purpose: What this task does for Step 1
        - name: Step2TaskName
          alias: validation                # Another meaningful alias
          type: task
          appTags:
              appType: [COPY_FROM_TASK_DEFINITION]
              environment: [logical] # Array
              execlevel: [app] # Array
          purpose: What this task does for validation
        ioMap:
        - step1.Input.TaskInput:=*.Input.InputName # Use task aliases in I/O mapping
        - validation.Input.TaskInput:=step1.Output.TaskOutput
        - '*.Output.FinalOutput:=validation.Output.TaskOutput'
        # MANDATORY: Always include these three outputs from the last task
        - '*.Output.CompliancePCT_:=validation.Output.CompliancePCT_' # Compliance percentage from last task
        - '*.Output.ComplianceStatus_:=validation.Output.ComplianceStatus_' # Compliance status from last task
        - '*.Output.LogFile:=validation.Output.LogFile' # Log file from last task

STEP 3 - I/O MAPPING WITH TASK ALIASES (Preserved):

  • Use golang-style assignment: destination:=source

  • 3-part structure: PLACE.DIRECTION.ATTRIBUTE_NAME

  • Always use EXACT attribute names from task specifications

  • Use meaningful task aliases instead of generic names

  • Ensure sequential data flow: Rule β†’ Task1 β†’ Task2 β†’ Rule

  • Mandatory compliance outputs from last task
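The mapping grammar above (golang-style `destination:=source` with the 3-part PLACE.DIRECTION.ATTRIBUTE structure) can be sketched as a small parser. This is an illustrative sketch of the documented grammar, not the tool's own validator.

```python
def parse_io_entry(entry: str) -> dict:
    """Split one ioMap entry into its destination and source references."""
    dest, src = entry.split(":=", 1)  # golang-style assignment

    def parts(ref: str) -> dict:
        # 3-part structure: PLACE.DIRECTION.ATTRIBUTE_NAME
        # (PLACE is a task alias, or '*' for the rule itself).
        place, direction, attr = ref.split(".", 2)
        assert direction in ("Input", "Output"), f"bad direction: {direction}"
        return {"place": place, "direction": direction, "attribute": attr}

    return {"dest": parts(dest), "src": parts(src)}
```

Running every ioMap line through such a check catches typos in task aliases and attribute names before rule creation.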

STEP 4 - inputsMeta__ Cleanup: In spec.inputsMeta__, retain only the entries whose keys exist in spec.inputs. Remove any fields in spec.inputsMeta__ that are not present in spec.inputs.
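The cleanup step above amounts to a one-pass filter. A minimal sketch, assuming `spec` is a plain dict shaped like the rule structure shown earlier:

```python
def prune_inputs_meta(spec: dict) -> dict:
    """Retain only inputsMeta__ entries whose names exist in spec.inputs."""
    present = set(spec.get("inputs", {}))
    spec["inputsMeta__"] = [
        m for m in spec.get("inputsMeta__", []) if m.get("name") in present
    ]
    return spec
```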

VALIDATION CHECKLIST (Preserved):

  □ Rule structure validation against schema
  □ Task alias validation in I/O mappings
  □ Primary app type determination
  □ Input/output specifications validation
  □ Mandatory compliance outputs present
  □ Sequential data flow in I/O mappings

Args: rule_structure: Complete rule structure with any level of completion

Returns: Result of rule creation including auto-detected status and completion level

fetch_ruleB

Fetch rule details by rule name.

Args: rule_name: Name of the rule to retrieve

Returns: Dict containing complete rule structure and metadata

get_rules_summaryA

Tool-based version of get_rules_summary for improved compatibility and prevention of duplicate rule creation.

This tool serves as the initial step in the rule creation process. It helps determine whether the user's proposed use case matches any existing rule in the catalog.

PURPOSE:

  • To analyze the user's use case and avoid duplicate rule creation by identifying the most suitable existing rule based on its name, description, and purpose.

  • NEW: Check for partially developed rules in local system before allowing new rule creation

  • NEW: Present resumption options if incomplete rules are found to prevent duplicate work

WHEN TO USE:

  • As the first step before initiating a new rule creation process

  • When the user wants to retrieve and review all available rules in the catalog

  • When verifying if a similar rule already exists that can be reused or customized

  • NEW: When checking for incomplete local rules that should be resumed instead of creating new ones

🚫 DO NOT USE THIS TOOL FOR:

  • Checking what rules are available in the ComplianceCow system.

  • This tool only works with the rule catalog (not the entire ComplianceCow system).

  • The catalog contains only rules that are published and available for reuse in the catalog.

  • For direct ComplianceCow system lookups, use dedicated system tools instead:

  • fetch_cc_rule_by_name

  • fetch_cc_rule_by_id

WHAT IT DOES:

  • Retrieves the full list of rules from the catalog with simplified metadata (name, purpose, description)

  • Performs intelligent matching using metadata (name, description, purpose) with user-provided use case details

  • Uses semantic pattern recognition to find similar rules, even across different systems (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions)

IF A MATCHING RULE IS FOUND:

  • Retrieves complete details via fetch_rule().

  • If the readmeData field is available in the fetch_rule() response, performs README-based validation against it to assess the rule's suitability for the user's use case.

  • If suitable:

  • Returns the rule with full metadata, explanation, and the analysis report.

  • If not suitable:

  • Informs the user that the rule's README content does not align with the intended use case.

  • Prompts the user with clear next-step options:

    • "The rule's README content does not align with your use case. Please choose one of the following options:"

    • Customize the existing rule

    • Evaluate alternative matching rules

    • Proceed with new rule creation

  • Waits for the user's choice before proceeding.

IF A SIMILAR RULE EXISTS FOR AN ALTERNATE TECHNOLOGY STACK:

  • Detects rules with the same logic but built for a different platform or system (e.g., AzureUserUnusedPermission for SalesforceUserUnusedPermissions)

  • If the readmeData field is available in the fetch_rule() response, retrieves and analyzes it to compare the implementation details against the user's proposed use case

  • Based on the comparison:

    • If the README content matches or is mostly reusable, suggest using the existing rule structure and logic as a foundation to create a new rule tailored to the user's target system

    • If the README content does not match or is not suitable, clearly inform the user and recommend either modifying the logic significantly or proceeding with a completely new rule from scratch

IF NO SUITABLE RULE IS FOUND:

  • Clearly informs the user that no relevant rule matches the proposed use case

  • Suggests continuing with new rule creation

  • Optionally highlights similar rules that can be used as a reference

MANDATORY STEPS: README VALIDATION:

  • Always retrieve and analyze readmeData from fetch_rule().

  • Ensure the rule's logic, behavior, and intended use align with the user's proposed use case.

README ANALYSIS REPORT:

  • Generate a clear and concise report for each readmeData analysis that classifies the result as a full match, partially reusable, or not aligned.

  • Present this report to the user for review.

USER CONFIRMATION BEFORE PROCEEDING: When analyzing a README file:

  • If no relevant rule matches the proposed use case, or if the README is deemed unsuitable, the tool must pause and request explicit user confirmation before proceeding further.

  • The tool should:

  • Clearly inform the user that no matching rule was found or the README is not appropriate.

  • Suggest creating a new rule as the next step.

  • Optionally recommend similar existing rules that can serve as references to help the user craft the new rule.

ITERATE UNTIL MATCH:

  • Repeat the above steps until a suitable rule is found or all options are exhausted.

CROSS-PLATFORM RULE HANDLING:

  • For rules from a different stack:

  • If reusable: suggest customization

  • If not reusable: recommend new rule creation

Returns:

  • A single rule object with full metadata and verified README match β€” if an exact match is found

  • A similar rule suggestion with customization options β€” if a cross-system match is found (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions)

  • A message indicating no suitable rule found β€” with next steps and guidance to create a new rule

execute_ruleA

RULE EXECUTION WORKFLOW:

PREREQUISITE STEPS:

  0. MANDATORY: Check rule status to ensure the rule is fully developed before execution

  1. User chooses to execute rule after creation

  2. Extract unique appTags from selected tasks (excluding 'nocredapp')

  3. APPLICATION CONFIGURATION (OPTIONAL - only for tasks requiring credentials): For tasks that need application credentials:

    • Fetch available applications via get_applications_for_tag().

    • Present them to the user for manual selection.

    • User decides to: a. Use an existing application, or b. Run with new credentials (not persisted or saved as an application).

    • Proceed after user confirmation.

    Note: Rules with only 'nocredapp' tasks can be executed without any application configuration.

APPLICATION-TASK MATCHING LOGIC (when applications are needed):

  • Applications are matched to tasks via 'appTags' labels

  • Tasks with 'nocredapp' appType do not require application configuration

  • SHARED APPLICATION SUPPORT: A single application CAN be used for multiple tasks if the user confirms they want to share the same credentials

  • When multiple tasks share the same appType AND require DIFFERENT applications, unique identifier key-value pairs MUST be added to distinguish them

MATCHING SCENARIOS:

  1. One application per task: Each task has unique appType β†’ straightforward matching

  2. Shared application: Multiple tasks share same appType AND same application

    • User confirms: "Use same application for all [appType] tasks? (yes/no)"

    • If yes: Single application covers all matching tasks

    • Application appTags should match the common appType

  3. Multiple applications for same appType: Different credentials needed for different tasks

    • Add unique identifier key (e.g., "purpose", "sourceSystem") to distinguish

    • Each application's appTags must include the unique identifier matching its target task
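The matching scenarios above hinge on first grouping tasks by appType. A minimal sketch, assuming tasks carry the alias and appTags fields from the rule structure; groups with more than one alias are the ones that trigger the shared-vs-different prompt:

```python
def group_tasks_by_app_type(tasks):
    """Group task aliases by appType, skipping credential-free tasks."""
    groups = {}
    for task in tasks:
        for app_type in task.get("appTags", {}).get("appType", []):
            if app_type == "nocredapp":
                continue  # no application configuration needed
            groups.setdefault(app_type, []).append(task["alias"])
    return groups
```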

APPLICATION CONFIGURATION FORMAT (when needed): For an existing application (can be shared across multiple tasks):

    [
        {
            "applicationType": "[application_class_name from fetch_applications(appType)]",
            "applicationId": "[Actual application ID chosen by user]",
            "appTags": "[Complete object from rule spec.tasks[].appTags]"
        }
    ]

For new credentials:

    [
        {
            "applicationType": "[application_class_name from fetch_applications(appType)]",
            "appURL": "[Application URL from user (optional - can be empty string)]",
            "credentialType": "[User chosen credential type]",
            "credentialValues": { "[User provided credentials]" },
            "appTags": "[Complete object from rule spec.tasks[].appTags]"
        }
    ]

WORKFLOW FOR MULTIPLE TASKS WITH SAME APPTYPE:

  1. Detect tasks sharing same appType (excluding 'nocredapp')

  2. Ask user: "Tasks [task1, task2] both require [appType]. Options: a) Use SAME application/credentials for all tasks b) Use DIFFERENT applications (requires unique identifiers)"

  3. If SAME: User provides one application config with basic appTags

  4. If DIFFERENT:

    • Prompt for unique identifier key (e.g., "purpose", "sourceSystem")

    • User provides separate application configs with unique identifier values

    • Update task appTags with matching unique identifiers

  5. Build applications array (if needed) β†’ get user confirmation

  6. Additional Inputs (optional):

    • Ask user: "Do you want to specify a date range for this execution?"

    • From Date (format: YYYY-MM-DD) - optional

    • To Date (format: YYYY-MM-DD) - optional

  7. Final confirmation β†’ execute rule

  8. If execution starts successfully β†’ call fetch_execution_progress()

  9. Rule Output File Display Process:

     a. Extract task outputs from execution results

     b. MANDATORY: Show output in this format:
        - TaskName: [task_name]
        - Files: [list of files]

     c. Ask: "View file contents? (yes/no)"

     d. If yes: Call fetch_output_file() for each requested file

     e. Display results with formatting

  10. Rule Publication (optional):

  • Ask user: "Do you want to publish this rule to make it available in ComplianceCow system? (yes/no)"

  • If yes: Call publish_rule() to publish the rule

  • If no: End workflow

UI DISPLAY REQUIREMENT:

  • The file URL must ALWAYS be displayed to the user in the UI, allowing the user to view or download the file directly.

CRITICAL: rule_inputs MUST be the complete spec.inputsMeta__ objects with ALL original fields (name, description, dataType, repeated, allowedValues, required, defaultValue, format, showField, explanation) plus the 'value' field. DO NOT send trimmed objects with only name/dataType/value.

MANDATORY: The 'value' field content MUST also be copied to the 'defaultValue' field. Both fields must contain identical values. Example: if value="CSV", then defaultValue must also be "CSV".
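The two requirements above (send complete inputsMeta__ objects, and mirror 'value' into 'defaultValue') can be sketched as a preparation step. This is a sketch assuming `values` is a simple name-to-value mapping collected from the user:

```python
def finalize_rule_inputs(inputs_meta, values):
    """Attach 'value' to each complete inputsMeta__ object, mirroring it
    into 'defaultValue' while preserving every original field."""
    out = []
    for meta in inputs_meta:
        item = dict(meta)                     # keep ALL original fields
        item["value"] = values[item["name"]]
        item["defaultValue"] = item["value"]  # both fields must be identical
        out.append(item)
    return out
```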

Args:
    rule_name: The name of the rule to be executed.
    from_date: (Optional) Start date provided by the user in the format YYYY-MM-DD.
    to_date: (Optional) End date provided by the user in the format YYYY-MM-DD.
    rule_inputs: Complete spec.inputsMeta__ objects with ALL fields plus the 'value' field, with 'defaultValue' set to the same value as 'value'.
    applications: Application configuration details. For rules with only 'nocredapp' tasks, pass an empty list and the system will automatically use the hardcoded nocredapp application structure.
    is_application_data_provided_by_user (bool): Whether application data was provided by the user. Set to True if the user provided or configured application details during execution; set to False if using nocredapp (an empty applications list) or pre-existing applications.

Returns: Dict with execution results

configure_rule_output_schemaA

PREREQUISITE β€” MUST RUN FIRST (NON-SKIPPABLE) This tool is a hard prerequisite and MUST be executed successfully before the prepare_input_collection_overview() tool (and any downstream rule-creation or evaluation steps). If this tool has not run or did not complete, the workflow MUST fail fast with an explicit error.

PURPOSE Establish the rule's output schema policy for ComplianceCow and apply any required transformations. In ComplianceCow, we maintain a standard format for storing evidence records. The user MUST choose one of the following rule output options:

  1. Standard schema only (ComplianceCow structured response fields)

  2. Extended schema only (all fields from the source response)

  3. Both standard + extended

USER PROMPT (MANDATORY β€” NEVER SKIPPABLE) The workflow MUST always pause and explicitly prompt the user before proceeding.
This step CANNOT be bypassed, defaulted, auto-selected, or inferred.
If the user has not actively selected one of (a), (b), or (c), this tool MUST fail fast with a clear error message and stop execution.

VALIDATION & ENFORCEMENT

  • This tool is NON-SKIPPABLE. If not executed, or if the user does not provide an explicit choice (a/b/c), the workflow MUST stop immediately with an error.

  • No implicit defaults, assumptions, or auto-selections are allowed.

  • Mandatory Key mapping rules still apply if Standard schema is chosen.

BEHAVIOR BY SELECTION

A) If user selects STANDARD ONLY:

  • If the pipeline already ends with a Transformation task, reuse the existing Transformation task instead of appending a new one.

  • Otherwise, append a Transformation task at the END of the selected task pipeline.

  • In the Transformation task, map ALL Mandatory Keys (listed below).

  • Values for these keys MUST be taken from the pipeline's input file(s) and/or upstream task outputs, following the Deeper Analysis Rules.

  • Continue collecting inputs for the Transformation task using: collect_template_input() or collect_parameter_input().

  • For each input that requires user guidance, call: get_template_guidance('{task.name}', '<input_name>') to display the expected input format to the user.

  • Ask the user to review and confirm OR edit the configuration before proceeding.

  • Do not proceed unless all Mandatory Keys are mapped and the configuration is confirmed (fail fast with guidance).

B) If user selects EXTENDED ONLY:

  • The Extended schema is a NON-STANDARD structure. It preserves the raw fields from the source response without enforcing ComplianceCow's standard schema format or mandatory key order.

  • Use the LAST task's output directly as the Extended schema output.

  • No mandatory field ordering or schema enforcement is applied β€” the structure is kept as-is for completeness and traceability.

C) If user selects BOTH:

  • Perform all steps from (A) to create the Standard schema:

  • Append a Transformation task at the END of the selected task pipeline.

  • Map ALL Mandatory Keys in the exact required order.

  • Include as needed for compliance.

  • Also add the Extended schema as a NON-STANDARD structure:

  • Create exactly ONE output field named ExtendedData_<Name>, where <Name> MUST be determinable from the use case (e.g., source, resource, or input artifact name).

  • Map the SAME LAST task output that is used as the input to the Transformation task into ExtendedData_<Name>.

  • Do NOT create duplicate extended outputs (for example, do not add both ExtendedData_JSONToCSV and ConvertedCSVFile if they contain the same data). Only ExtendedData_<Name> must exist.

  • Continue collecting inputs for the Transformation task using: collect_template_input() or collect_parameter_input().

  • For each input that requires user guidance, call: get_template_guidance('{task.name}', '<input_name>') to display the expected input format to the user.

  • Ask the user to review and confirm OR edit the configuration before proceeding.

  • Do not proceed unless:

  • All Mandatory Keys are mapped and validated in order

  • Configuration is confirmed by the user

DEEPER ANALYSIS RULES

  • Always extract and map the core Mandatory Keys required for compliance.

  • For <Important Keys Based On User's Use Case>, determine the minimal required fields based on the user's specific use case and map them under the Standard schema.

  • If additional fields are critical for the use case, map them explicitly into the Standard schema.

  • If fields are non-critical but useful, preserve them under ExtendedData_<filename>.

  • If MCP cannot store certain fields, the tool MUST explain the omission clearly to the user before proceeding and request confirmation if needed.

MANDATORY KEYS (MUST ALWAYS BE MAPPED β€” IN THIS EXACT ORDER)

  • System

  • Source

  • ResourceID

  • ResourceName

  • ResourceType

  • ResourceLocation

  • ResourceTags

  • <Important Keys Based On User's Use Case> (for example: fields from the response file such as user_id, username, email, license_type, assigned_date, last_login_date, last_activity_date)

  • ValidationStatusCode

  • ValidationStatusNotes

  • ComplianceStatus

  • ComplianceStatusReason

  • EvaluatedTime

  • UserAction

  • ActionStatus

  • ActionResponseURL
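The exact-order requirement above can be checked mechanically. This is an illustrative sketch: the use-case-specific keys sit between the two fixed groups, so the validation below only pins the fixed resource keys at the front and the fixed status keys at the end, in the documented order.

```python
MANDATORY_KEYS_PREFIX = [
    "System", "Source", "ResourceID", "ResourceName",
    "ResourceType", "ResourceLocation", "ResourceTags",
]
MANDATORY_KEYS_SUFFIX = [
    "ValidationStatusCode", "ValidationStatusNotes",
    "ComplianceStatus", "ComplianceStatusReason",
    "EvaluatedTime", "UserAction", "ActionStatus", "ActionResponseURL",
]

def validate_mandatory_keys(mapped):
    """Fail fast unless the fixed keys appear in the exact required order."""
    n = len(MANDATORY_KEYS_PREFIX)
    if mapped[:n] != MANDATORY_KEYS_PREFIX:
        raise ValueError("standard schema must start with the fixed resource keys, in order")
    m = len(MANDATORY_KEYS_SUFFIX)
    if mapped[-m:] != MANDATORY_KEYS_SUFFIX:
        raise ValueError("standard schema must end with the fixed status keys, in order")
```

Key names are case-sensitive, so no normalization is applied before comparing.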

VALIDATION & ENFORCEMENT

  • This tool is NON-SKIPPABLE. If not executed, or if any Mandatory Key mapping is missing for the chosen Standard schema path, the workflow MUST stop with an error.

  • Key names are case-sensitive and MUST NOT be renamed.

  • The tool MUST persist the chosen option and mappings so that downstream tools consume a consistent schema contract.

  • The workflow MUST NOT proceed to prepare_input_collection_overview() until:

    • Inputs are collected via collect_template_input() or collect_parameter_input()

    • get_template_guidance() has been used for each input needing guidance

    • The user has confirmed or edited the configuration

    • All Mandatory Keys are mapped and validated in order

  • MANDATORY: a JS chart (Mermaid/D3) MUST be generated to visualize the rule's I/O field structure. The chart must be displayed in this chat immediately after user input, and no further processing is allowed until this step is completed.

EXECUTION ORDER GUARANTEE On success, and ONLY after input collection and configuration confirmation, the next tool to run MUST be prepare_input_collection_overview().

list_workflow_event_categoriesA

Retrieve available workflow event categories.

Event categories help organize workflow triggers by type (e.g., assessment events, time-based events, user actions). This is useful for filtering and selecting appropriate events when building workflows.

Returns: - eventCategories: List of event categories with type and displayable name - error: Error message if retrieval fails

list_workflow_eventsA

Retrieve available workflow events that can trigger workflows.

Events are the starting points of workflows. Each event has a payload that provides data to subsequent workflow nodes. Events are categorized into two types:

System Events: Automatically triggered by the system when specific actions occur. Examples include:

  • Assessment run completed

  • Form submitted

  • Scheduled time-based triggers

Custom Events: Manually triggered events that can be used to:

  • Trigger workflows from within other workflows

  • Integrate with external systems

  • Enable manual workflow execution

Returns: - systemEvents (List[WorkflowEventVO]): A list of system events that are automatically triggered. - id (str) - categoryId (str) - desc (str) - displayable (str) - payload [List[WorkflowPayloadVO]] - status (str) - type (str) - customEvents (List[WorkflowEventVO]): A list of custom events that can be manually triggered. - id (str) - categoryId (str) - desc (str) - displayable (str) - payload [List[WorkflowPayloadVO]] - status (str) - type (str) - error (Optional[str]): An error message if any issues occurred during retrieval.

list_workflow_activity_typesB

Get available workflow activity types.

Activity types define what kind of actions can be performed in workflow nodes:

  • Pre-build Function: Execute predefined logic

  • Pre-build Rule: Execute a rule

  • Pre-build Task: Trigger a predefined task

Returns: List of available activity types

list_workflow_function_categoriesA

Retrieve available workflow function categories.

Function categories help organize workflow activities by type. This is useful for filtering and selecting appropriate functions when building workflows.

Returns: - activity categories (List[WorkflowActivityCategoryItemVO]): List of activity categories. - name (str): Name of the category. - error (Optional[str]): An error message if any issues occurred during retrieval.

list_workflow_functionsA

Retrieve available workflow functions (activities).

Functions are the core actions that can be performed in workflow nodes. They take inputs and produce outputs that can be used by subsequent nodes. Only active functions are returned.

Returns:
- activities (List[WorkflowActivityVO]): List of active workflow functions with input/output specifications.
  - id (Optional[str], default "")
  - categoryId (str)
  - desc (str)
  - displayable (Optional[str], default "")
  - name (str)
  - inputs (List[WorkflowInputsVO])
  - outputs (List[WorkflowOutputsVO])
  - status (str)
- error (Optional[str]): An error message if any issues occurred during retrieval.
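Selecting the functions relevant to a given category from this result might look like the sketch below. The field names mirror the Returns listing; the sample records and the "active" status value are assumptions for illustration:

```python
def functions_in_category(activities, category_id):
    """Return names of active functions belonging to one category."""
    return [
        a["name"]
        for a in activities
        if a.get("categoryId") == category_id and a.get("status") == "active"
    ]

# Hypothetical sample data shaped like the activities list above.
sample = [
    {"name": "send_email", "categoryId": "notify", "status": "active"},
    {"name": "old_hook",   "categoryId": "notify", "status": "inactive"},
    {"name": "run_scan",   "categoryId": "scan",   "status": "active"},
]
```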

Prompts

Interactive templates invoked by user choice

NameDescription
generate_chart_prompt
generate_cypher_query_for_control
list_as_table_prompt
rule_generation_prompt
rule_input_collection
alterntive_prompt
ccow_workflow_knowledge

Resources

Contextual data attached and managed by the client

NameDescription
get_graph_schema_relationship

Retrieve the complete graph database schema and relationship structure for ComplianceCow. This resource provides essential information about the Neo4j compliance database structure, including node types, relationships, and hierarchical patterns.

CRITICAL INFORMATION FOR QUERY CONSTRUCTION:

1. CONTROL HIERARCHY ANALYSIS: Before querying controls, ALWAYS determine the hierarchy depth using:

   MATCH (root:Control) WHERE NOT ()-[:HAS_CHILD]->(root)
   WITH root
   MATCH path = (root)-[:HAS_CHILD*]->(leaf)
   WHERE NOT (leaf)-[:HAS_CHILD]->()
   RETURN root.id, leaf.id, length(path) as depth
   ORDER BY depth DESC LIMIT 1

2. RECURSIVE QUERY PATTERNS:
   - Use [HAS_CHILD*] for variable-length traversal
   - Use [HAS_CHILD*1..n] to limit depth
   - Example: MATCH (parent)-[:HAS_CHILD*]->(descendant)

3. EVIDENCE LOCATION: Evidence is ONLY available on leaf controls (controls with no children):

   MATCH (control:Control)-[:HAS_EVIDENCE]->(evidence:Evidence)
   WHERE NOT (control)-[:HAS_CHILD]->()

4. APOC PROCEDURES (if available):
   - apoc.path.subgraphAll() for complex traversals
   - apoc.path.expandConfig() for conditional expansion

5. CONTROL STATUS INFORMATION:
   - status: ["Completed", "In Progress", "Pending", "Unassigned"]
   - complianceStatus: ["COMPLIANT", "NON_COMPLIANT", "NOT_DETERMINED"]
   - Overdue controls: due_date < current_date and status in ["In Progress", "Pending"] (manual check required)

6. PERFORMANCE CONSIDERATIONS:
   - For large datasets, use LIMIT clauses
   - Consider using aggregation functions for summaries
   - Use WHERE clauses to filter early in the query

7. INTELLIGENT QUERY REFINEMENT FOR LARGE DATASETS: When queries return large datasets, implement smart refinement:
   a) BROAD QUERY DETECTION:
      - Detect queries with vague parameters like "all", "list everything", or empty values
      - Check dataset size before returning overwhelming results
      - Use summary queries to provide meaningful overviews first
   b) REFINEMENT SUGGESTION CATEGORIES: For CONTROLS, suggest filtering by:
      - Status: pending, completed, in progress, unassigned, overdue
      - Compliance: compliant, non-compliant, needs determination
      - Priority: high, medium, low priority controls
      - Time: due dates, recent updates, specific quarters
   c) USER GUIDANCE APPROACH:
      - Provide summary statistics instead of overwhelming lists
      - Offer specific example queries users can immediately try
      - Use clear, actionable language with practical suggestions
      - Format responses with visual hierarchy for easy scanning

8. SUMMARY QUERY APPROACH FOR LARGE DATASETS: Instead of returning overwhelming full record sets, use aggregation patterns:
   - Count totals by status, compliance state, framework, and assignment
   - Provide breakdown statistics rather than individual records
   - Show distribution patterns and key metrics
   - Offer sample records alongside summary statistics
   - Guide users toward more specific queries based on summary insights

Schema Information:
- Node types and their properties
- Relationship types and directions
- Constraints and indexes
- Hierarchy depth patterns

Use this schema information to construct accurate Cypher queries that respect the hierarchical nature of compliance controls.

Returns:
- dict: Complete database schema with structural patterns and query guidelines
- str: Error message if schema retrieval fails
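The hierarchy-depth check recommended by this resource can be assembled programmatically, combining the `[HAS_CHILD*]` and `[HAS_CHILD*1..n]` patterns it describes. This is a query-building sketch only; executing it would additionally require a Neo4j driver session, which is out of scope here:

```python
def depth_query(max_depth=None):
    """Build the hierarchy-depth Cypher query from the resource's pattern.

    max_depth, if given, bounds traversal using the [HAS_CHILD*1..n]
    form suggested in point 2 of the schema resource.
    """
    star = f"*1..{max_depth}" if max_depth else "*"
    return (
        "MATCH (root:Control) WHERE NOT ()-[:HAS_CHILD]->(root) "
        "WITH root "
        f"MATCH path = (root)-[:HAS_CHILD{star}]->(leaf) "
        "WHERE NOT (leaf)-[:HAS_CHILD]->() "
        "RETURN root.id, leaf.id, length(path) AS depth "
        "ORDER BY depth DESC LIMIT 1"
    )
```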


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ComplianceCow/cow-mcp'
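The same request from Python, sketched with the standard library. The endpoint is taken verbatim from the curl example above; no authentication headers are shown, and the JSON shape of the response is not assumed:

```python
import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/ComplianceCow/cow-mcp"

def fetch_server_info(url=URL):
    """GET the MCP directory entry for this server and decode the JSON body."""
    req = urllib.request.Request(url, method="GET")
    with urllib.request.urlopen(req) as resp:  # requires network access
        return json.loads(resp.read().decode("utf-8"))
```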

If you have feedback or need assistance with the MCP directory API, please join our Discord server.