
ComplianceCow MCP Server

get_rules_summary

Analyze use cases against the rule catalog to prevent duplicate rule creation by identifying existing matches or cross-platform equivalents, and detect incomplete local rules to resume instead of creating new ones.

Instructions

Tool-based version of get_rules_summary for improved compatibility and prevention of duplicate rule creation.

This tool serves as the initial step in the rule creation process. It helps determine whether the user's proposed use case matches any existing rule in the catalog.

PURPOSE:

  • To analyze the user's use case and avoid duplicate rule creation by identifying the most suitable existing rule based on its name, description, and purpose.

  • NEW: Checks for partially developed rules in the local system before allowing new rule creation

  • NEW: Presents resumption options if incomplete rules are found, to prevent duplicate work

WHEN TO USE:

  • As the first step before initiating a new rule creation process

  • When the user wants to retrieve and review all available rules in the catalog

  • When verifying if a similar rule already exists that can be reused or customized

  • NEW: When checking for incomplete local rules that should be resumed instead of creating new ones

🚫 DO NOT USE THIS TOOL FOR:

  • Checking what rules are available in the ComplianceCow system. This tool works only with the rule catalog, which contains only rules that are published and available for reuse.

  • For direct ComplianceCow system lookups, use the dedicated system tools instead:

    • fetch_cc_rule_by_name

    • fetch_cc_rule_by_id

WHAT IT DOES:

  • Retrieves the full list of rules from the catalog with simplified metadata (name, purpose, description)

  • Performs intelligent matching using metadata (name, description, purpose) with user-provided use case details

  • Uses semantic pattern recognition to find similar rules, even across different systems (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions)
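The matching step above can be sketched roughly as follows. This is a hypothetical illustration only: the real tool's semantic matching is internal to the MCP server, and `tokens()`, `PLATFORMS`, and `similarity()` are illustrative stand-ins, not its actual implementation.

```python
# Illustrative sketch of catalog matching: split CamelCase rule names,
# fold plurals, and ignore platform words so cross-stack equivalents
# (AzureUserUnusedPermission vs SalesforceUserUnusedPermissions) still score.
import re

PLATFORMS = {"azure", "aws", "gcp", "salesforce"}  # assumed platform prefixes

def tokens(text: str) -> set[str]:
    """Word tokens, splitting CamelCase names like AzureUserUnusedPermission
    and naively folding plurals so 'permissions' matches 'Permission'."""
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", text)
    return {w.lower().rstrip("s") for w in words}

def similarity(use_case: str, rule_meta: dict) -> float:
    """Jaccard overlap between the use case and a rule's name/purpose,
    with platform words removed from both sides."""
    a = tokens(use_case) - PLATFORMS
    b = (tokens(rule_meta["name"]) | tokens(rule_meta.get("purpose", ""))) - PLATFORMS
    return len(a & b) / len(a | b) if a | b else 0.0

rule = {"name": "AzureUserUnusedPermission",
        "purpose": "Detect unused user permissions"}
score = similarity("find Salesforce users with unused permissions", rule)
```

With the platform words stripped, the Azure rule still scores well against the Salesforce use case, which is the behavior the cross-platform matching described above relies on.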

IF A MATCHING RULE IS FOUND:

  • Retrieves complete details via fetch_rule().

  • If the readmeData field is present in the fetch_rule() response, performs README-based validation to assess the rule's suitability for the user's use case.

  • If suitable:

    • Returns the rule with full metadata, an explanation, and the analysis report.

  • If not suitable:

    • Informs the user that the rule's README content does not align with the intended use case.

    • Prompts the user with clear next-step options:

      • "The rule's README content does not align with your use case. Please choose one of the following options:"

      • Customize the existing rule

      • Evaluate alternative matching rules

      • Proceed with new rule creation

  • Waits for the user's choice before proceeding.
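The branching above can be sketched as a small decision function. fetch_rule() is the real sibling tool, but `handle_match()` and `use_case_aligns()` here are illustrative stand-ins for the server's internal logic, not its actual implementation.

```python
# Hypothetical sketch of the post-match flow: validate against readmeData
# when present, otherwise return the match with a note; on a mismatch,
# surface the three next-step options and wait for the user's choice.
def use_case_aligns(readme: str, use_case: str) -> bool:
    # Naive stand-in for README-based validation: require keyword overlap.
    return bool(set(readme.lower().split()) & set(use_case.lower().split()))

def handle_match(rule: dict, use_case: str) -> dict:
    readme = rule.get("readmeData")        # may be absent from fetch_rule()
    if readme is None:
        return {"status": "match", "rule": rule,
                "note": "no readmeData; validation skipped"}
    if use_case_aligns(readme, use_case):
        return {"status": "match", "rule": rule}
    return {"status": "needs_user_choice",
            "message": "The rule's README content does not align with your use case.",
            "options": ["Customize the existing rule",
                        "Evaluate alternative matching rules",
                        "Proceed with new rule creation"]}

aligned = handle_match(
    {"name": "R1", "readmeData": "detect unused permissions"},
    "unused permissions in Salesforce")
mismatched = handle_match(
    {"name": "R2", "readmeData": "encrypt storage disks"},
    "unused permissions in Salesforce")
```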

IF A SIMILAR RULE EXISTS FOR AN ALTERNATE TECHNOLOGY STACK:

  • Detects rules with the same logic but built for a different platform or system (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions)

  • If the readmeData field is present in the fetch_rule() response, retrieves and analyzes it to compare the implementation details against the user's proposed use case

  • Based on the comparison:

    • If the README content matches or is mostly reusable, suggest using the existing rule structure and logic as a foundation to create a new rule tailored to the user's target system

    • If the README content does not match or is not suitable, clearly inform the user and recommend either modifying the logic significantly or proceeding with a completely new rule from scratch

IF NO SUITABLE RULE IS FOUND:

  • Clearly informs the user that no relevant rule matches the proposed use case

  • Suggests continuing with new rule creation

  • Optionally highlights similar rules that can be used as a reference

MANDATORY STEP: README VALIDATION:

  • Always retrieve readmeData from fetch_rule() and analyze it whenever it is present.

  • Ensure the rule's logic, behavior, and intended use align with the user's proposed use case.

README ANALYSIS REPORT:

  • Generate a clear and concise report for each readmeData analysis that classifies the result as a full match, partially reusable, or not aligned.

  • Present this report to the user for review.
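An analysis report of the kind described above might take a shape like the following. The field names and classification labels here are assumptions for illustration, not the server's actual schema.

```python
# Illustrative shape of a README analysis report: one record per analyzed
# rule, classified as full match / partially reusable / not aligned.
from dataclasses import dataclass

@dataclass
class ReadmeAnalysisReport:
    rule_name: str
    classification: str   # "full match" | "partially reusable" | "not aligned"
    summary: str          # short, user-facing explanation of the verdict

report = ReadmeAnalysisReport(
    rule_name="AzureUserUnusedPermission",
    classification="partially reusable",
    summary="Logic matches, but the data source must change from Azure to Salesforce.",
)
```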

USER CONFIRMATION BEFORE PROCEEDING:

  • If no relevant rule matches the proposed use case, or if the README is deemed unsuitable, the tool must pause and request explicit user confirmation before proceeding further.

  • The tool should:

    • Clearly inform the user that no matching rule was found or that the README is not appropriate.

    • Suggest creating a new rule as the next step.

    • Optionally recommend similar existing rules that can serve as references to help the user craft the new rule.

ITERATE UNTIL MATCH:

  • Repeat the above steps until a suitable rule is found or all options are exhausted.
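The iteration described above amounts to walking the ranked candidates until one validates. In this hypothetical sketch, `candidates` and the `validate` callback stand in for the tool's internal ranking and README-validation steps.

```python
# Illustrative iterate-until-match loop: try candidates best-match-first,
# return the first rule that validates, or None when options are exhausted.
def find_suitable_rule(candidates, use_case, validate):
    for rule in candidates:          # assumed pre-sorted, best match first
        if validate(rule, use_case):
            return rule              # suitable rule found
    return None                      # all options exhausted

candidates = [{"name": "A", "suitable": False},
              {"name": "B", "suitable": True}]
chosen = find_suitable_rule(candidates, "some use case",
                            lambda rule, uc: rule["suitable"])
```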

CROSS-PLATFORM RULE HANDLING:

  • For rules from a different stack:

    • If reusable: suggest customization

    • If not reusable: recommend new rule creation

Returns:

  • A single rule object with full metadata and a verified README match, if an exact match is found

  • A similar rule suggestion with customization options, if a cross-system match is found (e.g., AzureUserUnusedPermission vs SalesforceUserUnusedPermissions)

  • A message indicating that no suitable rule was found, with next steps and guidance to create a new rule
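The three outcomes above might be represented as follows. These key names are assumptions inferred from the description, not the published output schema (which exposes a single result field).

```python
# Illustrative examples of the three return shapes; all keys hypothetical.
exact_match = {
    "status": "match",
    "rule": {"name": "SalesforceUserUnusedPermissions"},
    "readme_validation": "full match",
}
cross_platform = {
    "status": "similar_rule",
    "rule": {"name": "AzureUserUnusedPermission"},
    "suggestion": "Reuse the structure and logic; adapt to the target system.",
}
no_match = {
    "status": "no_match",
    "next_step": "Proceed with new rule creation.",
    "reference_rules": [],   # optional similar rules offered as references
}
```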

Input Schema


No arguments

Output Schema

  • result (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Extensively details internal behavior including semantic pattern recognition, readmeData validation logic, cross-platform rule handling, and mandatory iteration steps beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While well-structured with clear headers, the description is excessively verbose with repetitive phrasing (e.g., multiple mentions of `fetch_rule()` and `readmeData`) that could be condensed without losing meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensively covers all workflow branches (exact match, cross-platform similarity, no match) and mandatory validation steps required for the complex rule-matching workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters; description correctly does not fabricate parameter semantics, meeting the baseline for this case.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states the tool analyzes use cases to prevent duplicate rule creation and distinguishes clearly from siblings like `fetch_cc_rule_by_name` and `fetch_cc_rule_by_id`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit 'WHEN TO USE' and 'DO NOT USE THIS TOOL FOR' sections with clear alternatives provided for system-level lookups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
