get_enhanced_test_coverage_with_rules

Analyze test coverage using configurable rules to validate implementation quality and identify gaps in test cases.

Instructions

🔍 Enhanced test coverage analysis with configurable rules validation and quality scoring

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| project_key | No | Project key (auto-detected from case_key if not provided) | |
| case_key | Yes | Test case key (e.g., 'ANDROID-6') | |
| implementation_context | Yes | Actual implementation details (code snippets, file paths, or implementation description) | |
| analysis_scope | No | Scope of analysis: steps, assertions, data coverage, or full analysis | full |
| output_format | No | Output format: chat response, markdown file, detailed analysis, or all formats | detailed |
| include_recommendations | No | Include improvement recommendations | |
| validate_against_rules | No | Validate coverage against configured rules | |
| show_framework_detection | No | Show detected framework and patterns | |
| include_suite_hierarchy | No | Include featureSuiteId and rootSuiteId in analysis | |
| file_path | No | File path for adding code comments or saving markdown (optional) | |
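To make the schema concrete, a minimal invocation might look like the following Python sketch. The argument names mirror the table above; the specific values (the case key, the context string) are hypothetical, and the snippet only checks that the two required fields are present.

```python
# Hypothetical arguments for get_enhanced_test_coverage_with_rules.
# Only case_key and implementation_context are required; omitted
# optional parameters fall back to their schema defaults.
arguments = {
    "case_key": "ANDROID-6",
    "implementation_context": "LoginTest.kt: taps login button, asserts home screen",
    "analysis_scope": "full",      # default: full
    "output_format": "detailed",   # default: detailed
}

# Verify the required fields from the schema are present.
required = {"case_key", "implementation_context"}
missing = required - arguments.keys()
print(sorted(missing))
```

An empty list printed at the end means the call satisfies the schema's required fields.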
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. While it mentions 'analysis', 'validation', and 'scoring', it doesn't clarify whether this is a read-only operation, if it modifies data, what permissions are required, or what the output looks like (beyond format options). For a complex 10-parameter tool with no annotation coverage, this leaves significant behavioral gaps unaddressed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. The emoji adds visual distinction without being distracting. Every word contributes to understanding the tool's enhanced nature. It might benefit from a second sentence about output characteristics, but as-is it is appropriately concise for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 10-parameter analysis tool with no annotations and no output schema, the description is insufficiently complete. It doesn't explain what 'enhanced' means relative to basic coverage tools, what 'quality scoring' entails, what rules are validated against, or what the analysis output contains. The user must infer these critical details from parameter names alone, which is inadequate for proper tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: all 10 parameters are documented with descriptions, enums, defaults, and requirements. The tool description adds no parameter information beyond the schema; it neither explains relationships between parameters (e.g., how 'analysis_scope' affects 'validate_against_rules') nor provides usage examples. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
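The kind of parameter interaction the review is asking for can be illustrated with a sketch. Nothing below is confirmed by the tool's documentation; the mapping from scopes to checks and the check names themselves are invented purely to show one plausible relationship between 'analysis_scope' and 'validate_against_rules'.

```python
# Purely illustrative: a guessed relationship in which analysis_scope
# gates which rule checks run when validate_against_rules is enabled.
# The check names and mapping are hypothetical, not the tool's API.
SCOPE_CHECKS = {
    "steps": ["step_coverage"],
    "assertions": ["assertion_quality"],
    "data": ["data_coverage"],
    "full": ["step_coverage", "assertion_quality", "data_coverage"],
}

def checks_to_run(analysis_scope: str = "full",
                  validate_against_rules: bool = True) -> list[str]:
    """Return the rule checks implied by the two parameters."""
    if not validate_against_rules:
        return []  # rules validation disabled: no checks at all
    return SCOPE_CHECKS.get(analysis_scope, SCOPE_CHECKS["full"])

print(checks_to_run("steps"))
print(checks_to_run("full", validate_against_rules=False))
```

Even two or three sentences of this shape in the tool description ("when analysis_scope is 'steps', rules validation only covers step coverage") would close the gap the review identifies.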

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'enhanced test coverage analysis with configurable rules validation and quality scoring', which is a specific verb+resource combination. It distinguishes itself from siblings like 'get_test_coverage_by_test_case_steps_by_key' by emphasizing 'enhanced' analysis with rules validation. However, it doesn't explicitly differentiate from other analysis tools like 'analyze_test_failure' or 'detailed_analyze_launch_failures' beyond the coverage focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools focused on test analysis (e.g., 'analyze_test_failure', 'get_test_coverage_by_test_case_steps_by_key'), there's no indication of when this 'enhanced' analysis is preferred, what prerequisites exist, or when other tools might be more appropriate. The description assumes the user already knows the context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
