
ComplianceCow MCP Server

fetch_leaf_controls_of_an_assessment

Retrieve leaf-level controls from a compliance assessment to access control IDs, aliases, activation status, and associated rule names.

Instructions

To fetch the only the leaf controls for a given assessment. If assessment_id is not provided use other tools to get the assessment and its id.

Args:
- assessment_id (str, required): Assessment id or plan id.

Returns:
- controls (List[AutomatedControlVO]): List of controls
  - id (str): Control ID.
  - displayable (str): Displayable name or label.
  - alias (str): Alias of the control.
  - activationStatus (str): Activation status.
  - ruleName (str): Associated rule name.
  - assessmentId (str): Assessment identifier.
- error (Optional[str]): An error message if any issues occurred during retrieval.
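A response matching the documented fields might look like the following sketch; the field values here are illustrative placeholders, not output from the actual API:

```python
# Illustrative sketch of the documented return shape; all values are invented.
example_response = {
    "controls": [
        {
            "id": "ctrl-001",                # Control ID
            "displayable": "Access Review",  # Displayable name or label
            "alias": "access-review",        # Alias of the control
            "activationStatus": "active",    # Activation status
            "ruleName": "AccessReviewRule",  # Associated rule name
            "assessmentId": "asmt-123",      # Assessment identifier
        }
    ],
    "error": None,  # Optional error message if retrieval failed
}

# Each control carries every field named in the Returns section above.
expected_fields = {"id", "displayable", "alias", "activationStatus",
                   "ruleName", "assessmentId"}
for control in example_response["controls"]:
    assert expected_fields <= control.keys()
```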

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| assessment_id | No | | |
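Under the MCP protocol, invoking this tool goes through a JSON-RPC `tools/call` request. A minimal sketch of such a request, with a placeholder `assessment_id` value:

```python
# Sketch of an MCP tools/call request (JSON-RPC 2.0) for this tool.
# The assessment_id value is a placeholder, not a real identifier.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_leaf_controls_of_an_assessment",
        "arguments": {"assessment_id": "asmt-123"},
    },
}
```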
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It compensates well by detailing the return structure (List[AutomatedControlVO] with specific fields) and mentioning error handling. However, it lacks disclosure about pagination, rate limits, side effects, or what constitutes a 'leaf' control versus parent controls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The docstring format with Args and Returns sections is well-structured and front-loaded. The Returns section provides valuable field-level detail that compensates for the missing output schema. Every sentence earns its place, though the typo 'the only the' in the first sentence slightly detracts from professionalism.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Considering the complete absence of annotations and output schema, plus 0% parameter coverage, the description provides adequate context through detailed return value documentation. However, it remains incomplete regarding domain terminology (defining 'leaf controls') and operational constraints (pagination, filtering behavior, maximum result limits).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage, the description effectively compensates by specifying the parameter type (str), labeling it as required, and clarifying semantics ('Assessment id or plan id'). The note that a 'plan id' is also acceptable adds significant value beyond the raw schema. Note: There's a minor inconsistency where the description marks it required but the schema provides a default empty string.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
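The required/default inconsistency noted above can be made concrete with a hypothetical JSON Schema fragment (the schema shown here is assumed for illustration, not taken from the server):

```python
# Hypothetical input schema illustrating the mismatch the review notes:
# the docstring calls assessment_id required, but the schema gives it a
# default and omits it from "required", making it optional on the wire.
input_schema = {
    "type": "object",
    "properties": {
        "assessment_id": {"type": "string", "default": ""},
    },
    # No "required": ["assessment_id"] entry, so the field is optional.
}
```

An agent reading only the schema would treat the parameter as optional, while an agent reading the description would treat it as mandatory; the review docks the tool for leaving that ambiguity unresolved.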

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (fetch) and resource (leaf controls) with scope (for a given assessment). It distinguishes from siblings like 'fetch_controls' by specifying 'leaf controls'. However, it contains a typo ('the only the'), lacks explanation of what 'leaf controls' means in the domain hierarchy, and doesn't clarify how this differs from 'fetch_automated_controls_of_an_assessment'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides prerequisite guidance ('If assessment_id is not provided use other tools'), which helps with invocation sequencing. However, it fails to specify when to use this tool versus similar siblings like 'fetch_automated_controls_of_an_assessment' or 'fetch_assessment_run_leaf_controls', or what makes 'leaf' controls distinct from other control types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
