Glama

Find Uncovered Failure Areas

find_uncovered_failure_areas

Identifies high-risk code areas by cross-referencing low test coverage with recent test failures to prioritize testing improvements.

Instructions

Find areas of code that have both low coverage AND test failures.

This cross-references test failures with coverage data to identify high-risk areas in your codebase that need attention. Files are ranked by a "risk score" calculated as: (100 - coverage%) × failureCount.
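The ranking described above can be sketched in a few lines of Python. This is an illustrative sketch of the documented formula, not the server's actual implementation, and the field names (`coverage`, `failureCount`, `riskScore`) are assumptions:

```python
# Sketch of the documented ranking: risk = (100 - coverage%) * failureCount,
# restricted to files below the coverage threshold, highest risk first.
# Field names are illustrative, not the tool's actual schema.

def rank_risk_areas(files, coverage_threshold=80):
    """Return files below the coverage threshold, sorted by risk score."""
    at_risk = [
        {**f, "riskScore": (100 - f["coverage"]) * f["failureCount"]}
        for f in files
        if f["coverage"] < coverage_threshold
    ]
    return sorted(at_risk, key=lambda f: f["riskScore"], reverse=True)

files = [
    {"path": "src/auth.py", "coverage": 40, "failureCount": 5},
    {"path": "src/util.py", "coverage": 90, "failureCount": 2},   # above threshold, excluded
    {"path": "src/db.py",   "coverage": 70, "failureCount": 12},
]
for area in rank_risk_areas(files):
    print(area["path"], area["riskScore"])
```

Note that a moderately covered file with many failures (`src/db.py`, score 360) can outrank a poorly covered file with fewer failures (`src/auth.py`, score 300), which is the point of combining the two metrics.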

When using a user API Key (gaf_), you must provide a projectId. Use list_projects first to find available project IDs.

Parameters:

  • projectId: The project to analyze (required)

  • days: Analysis period for test failures (default: 30)

  • coverageThreshold: Include files below this coverage % (default: 80)

Returns:

  • List of risk areas sorted by risk score (highest risk first)

  • Each area includes: file path, coverage %, failure count, risk score, test names

Use this to prioritize which parts of your codebase need better test coverage.
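A call to this tool from an MCP client might carry arguments like the following; the project ID value is purely illustrative:

```json
{
  "name": "find_uncovered_failure_areas",
  "arguments": {
    "projectId": "proj_123",
    "days": 30,
    "coverageThreshold": 80
  }
}
```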

Input Schema

Name               Required  Description                                                     Default
projectId          Yes       Project ID to analyze. Use list_projects to find project IDs.   -
days               No        Number of days to analyze for test failures                     30
coverageThreshold  No        Include files with coverage below this percentage               80

Output Schema

Name            Required
message         No
riskAreas       Yes
hasCoverage     Yes
hasTestResults  Yes
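A hypothetical response matching this schema, with illustrative values, might look like:

```json
{
  "riskAreas": [
    {
      "file": "src/db.py",
      "coverage": 70,
      "failureCount": 12,
      "riskScore": 360,
      "testNames": ["test_db_connection_retry"]
    }
  ],
  "hasCoverage": true,
  "hasTestResults": true
}
```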
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the ranking algorithm ('risk score calculated as: (100 - coverage%) × failureCount'), the authentication requirement for user API keys, and the tool's output structure. However, it does not mention rate limits, pagination, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: purpose first, then algorithm, authentication guidance, parameters, returns, and usage recommendation. Every sentence adds value with zero waste. The bulleted format enhances readability without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (risk analysis combining two metrics), the description provides complete context: purpose, algorithm, authentication requirements, parameter overview, and return format. With an output schema available, it doesn't need to detail return values. The description adequately compensates for the lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value beyond the schema: it mentions the parameters but doesn't provide additional context about their interactions or effects beyond what the schema descriptions already state.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find areas of code that have both low coverage AND test failures' and 'cross-references test failures with coverage data to identify high-risk areas.' It distinguishes from siblings like 'get_untested_files' (coverage only) or 'get_failure_clusters' (failures only) by combining both metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'When using a user API Key (gaf_), you must provide a projectId. Use list_projects first to find available project IDs.' It also implicitly distinguishes from siblings by focusing on risk prioritization rather than raw data retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gaffer-sh/mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.