# analyze_tests
Analyze the most recent test run to identify failures and generate detailed insights or summaries for efficient debugging and issue resolution.
## Instructions
Analyze the most recent test run and provide detailed information about failures.
Args:

- `summary_only`: Whether to return only a summary of the test results
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| summary_only | No | Whether to return only a summary of the test results | `false` |
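For orientation, here is a minimal sketch of invoking `analyze_tests` from the official MCP Python client over stdio. The server launch command and script path are assumptions for illustration; adjust them to match your installation.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command for the server; adjust to your setup.
server_params = StdioServerParameters(
    command="python",
    args=["src/log_analyzer_mcp/log_analyzer_mcp_server.py"],
)


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Arguments mirror the input schema above.
            result = await session.call_tool("analyze_tests", {"summary_only": True})
            print(result.content)


asyncio.run(main())
```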
## Implementation Reference
- The handler function for the `analyze_tests` tool, including the `@mcp.tool()` decorator for automatic registration. This is the core implementation: it reads the test log file, parses its contents with `analyze_pytest_log_content`, adds metadata, and returns the analysis dictionary.

```python
@mcp.tool()
async def analyze_tests(summary_only: bool = False) -> dict[str, Any]:
    """Analyze the most recent test run and provide detailed information about failures.

    Args:
        summary_only: Whether to return only a summary of the test results
    """
    logger.info("Analyzing test results (summary_only=%s)...", summary_only)

    log_file = test_log_file
    if not os.path.exists(log_file):
        error_msg = f"Test log file not found at: {log_file}. Please run tests first."
        logger.error(error_msg)
        return {"error": error_msg, "summary": {"status": "ERROR", "passed": 0, "failed": 0, "skipped": 0}}

    try:
        with open(log_file, encoding="utf-8", errors="ignore") as f:
            log_contents = f.read()

        if not log_contents.strip():
            error_msg = f"Test log file is empty: {log_file}"
            logger.warning(error_msg)
            return {"error": error_msg, "summary": {"status": "EMPTY", "passed": 0, "failed": 0, "skipped": 0}}

        analysis = analyze_pytest_log_content(log_contents, summary_only=summary_only)

        # Add metadata similar to the old analyze_test_log function
        log_time = datetime.fromtimestamp(os.path.getmtime(log_file))
        time_elapsed = (datetime.now() - log_time).total_seconds() / 60  # minutes
        analysis["log_file"] = log_file
        analysis["log_timestamp"] = log_time.isoformat()
        analysis["log_age_minutes"] = round(time_elapsed, 1)

        # The analyze_pytest_log_content already returns a structure including 'overall_summary'.
        # If summary_only is true, it returns only that. Otherwise, it returns more details.
        # We can directly return this analysis dictionary.

        # Ensure there's always a summary structure for consistent access, even if minimal
        if "overall_summary" not in analysis:
            analysis["overall_summary"] = {"status": "UNKNOWN", "passed": 0, "failed": 0, "skipped": 0}
        if "summary" not in analysis:  # for backward compatibility or general access
            analysis["summary"] = analysis["overall_summary"]

        logger.info(
            "Test log analysis completed using test_log_parser. Summary status: %s",
            analysis.get("summary", {}).get("status"),
        )
        return analysis
    except Exception as e:  # pylint: disable=broad-exception-caught
        error_msg = f"Error analyzing test log file with test_log_parser: {e}"
        logger.error(error_msg, exc_info=True)
        return {"error": error_msg, "summary": {"status": "ERROR", "passed": 0, "failed": 0, "skipped": 0}}
```
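The handler can also be exercised directly (for example in a test for the server module), since FastMCP's `@mcp.tool()` decorator typically returns the original function unchanged; the import path below is an assumption derived from the file location shown here.

```python
import asyncio

# Assumed import path, based on src/log_analyzer_mcp/log_analyzer_mcp_server.py
from log_analyzer_mcp.log_analyzer_mcp_server import analyze_tests


async def check_latest_run() -> None:
    analysis = await analyze_tests(summary_only=True)
    if "error" in analysis:
        print("Analysis failed:", analysis["error"])
        return
    summary = analysis.get("summary", {})
    # Only keys the handler itself guarantees are accessed directly;
    # everything produced by analyze_pytest_log_content is read defensively.
    print(
        f"{summary.get('status', 'UNKNOWN')}: "
        f"{summary.get('failed', 0)} failed, {summary.get('passed', 0)} passed "
        f"(log is {analysis['log_age_minutes']} minutes old)"
    )


asyncio.run(check_latest_run())
```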
- Pydantic `BaseModel` defining the input schema for the `analyze_tests` tool, specifying the `summary_only` parameter.

```python
class AnalyzeTestsInput(BaseModel):
    """Parameters for analyzing tests."""

    summary_only: bool = Field(default=False, description="Whether to return only a summary of the test results")
```
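As a sketch of how raw tool arguments could be validated against this model before dispatch (assuming Pydantic v2 and the same module path as above):

```python
from pydantic import ValidationError

# Assumed import path for the input model.
from log_analyzer_mcp.log_analyzer_mcp_server import AnalyzeTestsInput

try:
    params = AnalyzeTestsInput.model_validate({"summary_only": True})
except ValidationError as exc:
    raise SystemExit(f"Invalid arguments for analyze_tests: {exc}")

print(params.model_dump())  # {'summary_only': True}
```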
- `src/log_analyzer_mcp/log_analyzer_mcp_server.py:176-176` (registration): the `@mcp.tool()` decorator registers the `analyze_tests` function as an MCP tool.

```python
@mcp.tool()
```