axom_mcp_analyze
Analyze code and data to troubleshoot issues, review quality, audit security, suggest refactoring, or assess test coverage with configurable depth and focus areas.
Instructions
Analyze code and data with configurable depth and scope.
Analysis Types:
- debug: Troubleshoot issues, investigate errors, diagnose problems
- review: Code review, quality assessment, best practices
- audit: Security audit, compliance check, vulnerability scan
- refactor: Refactoring suggestions, code improvement recommendations
- test: Test coverage analysis, test generation suggestions

Focus Areas:
- security: Security vulnerabilities, injection risks, auth issues
- performance: Performance bottlenecks, optimization opportunities
- architecture: Architectural patterns, design issues
- maintainability: Code smells, complexity, documentation

Depth Levels:
- minimal: Quick scan, critical issues only
- low: Basic analysis, obvious issues
- medium: Standard analysis (default)
- high: Deep analysis, all issues
- max: Exhaustive analysis, edge cases
Chain Support: Use chain parameter to automatically act on analysis results.
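A hypothetical invocation might pass arguments like the sketch below. The values follow the input schema; the keys inside the chain entry (`tool`, `arguments`) are an illustrative assumption, since the exact chain step format is not shown in this excerpt.

```python
# Hypothetical arguments for an axom_mcp_analyze call (illustrative only).
arguments = {
    "type": "audit",                # debug | review | audit | refactor | test
    "target": "src/app/db.py",      # file path or raw code content
    "focus": "security",            # optional focus area
    "depth": "high",                # minimal | low | medium | high | max
    "output_format": "actionable",  # summary | detailed | actionable
    # Optional follow-up operations; the step keys below are an assumption,
    # not a documented chain format.
    "chain": [
        {"tool": "axom_mcp_transform", "arguments": {"target": "src/app/db.py"}},
    ],
}

assert arguments["type"] in {"debug", "review", "audit", "refactor", "test"}
```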
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Analysis type | |
| target | Yes | File path or code to analyze | |
| focus | No | Focus area (e.g., security, performance) | |
| depth | No | Analysis depth level | medium |
| output_format | No | Output format preference | summary |
| chain | No | Chain operations based on results | |
Implementation Reference
- src/axom_mcp/handlers/analyze.py:44-115 (handler): Main handler for the axom_mcp_analyze tool. Validates arguments against the AnalyzeInput schema, detects whether target is a file path or inline code content, routes to the type-specific analyzers (debug/review/audit/refactor/test), and formats output according to the output_format parameter.

```python
async def handle_analyze(arguments: Dict[str, Any]) -> str:
    """Handle axom_mcp_analyze tool calls.

    Args:
        arguments: Tool arguments containing analysis type and parameters

    Returns:
        JSON string with analysis result
    """
    # Validate input
    input_data = AnalyzeInput(**arguments)
    analysis_type = input_data.type
    target = input_data.target
    focus = input_data.focus
    depth = input_data.depth or "medium"
    output_format = input_data.output_format or "summary"

    try:
        # Check if target is a file path or code content
        target_path = None
        code_content = None
        try:
            target_path = _validate_path(target)
            # If path exists and is a file, read it; otherwise treat as code content
            if target_path.exists() and target_path.is_file():
                code_content = target_path.read_text(
                    encoding="utf-8", errors="replace"
                )
            else:
                # Path is valid but file doesn't exist - treat as code content
                code_content = target
        except ValueError:
            # Target is code content, not a file path
            code_content = target

        if code_content is None:
            return json.dumps({"error": f"Could not read target: {target}"})

        # Perform analysis based on type
        if analysis_type == "debug":
            result = await _analyze_debug(code_content, focus, depth)
        elif analysis_type == "review":
            result = await _analyze_review(code_content, focus, depth)
        elif analysis_type == "audit":
            result = await _analyze_audit(code_content, focus, depth)
        elif analysis_type == "refactor":
            result = await _analyze_refactor(code_content, focus, depth)
        elif analysis_type == "test":
            result = await _analyze_test(code_content, focus, depth)
        else:
            return json.dumps({"error": f"Unknown analysis type: {analysis_type}"})

        # Format output
        if output_format == "detailed":
            return json.dumps(result, indent=2)
        elif output_format == "actionable":
            return _format_actionable(result)
        else:
            return json.dumps(
                {
                    "success": result.get("success", True),
                    "type": analysis_type,
                    "target": str(target_path) if target_path else "code",
                    "focus": focus if focus else "general",
                    "issues_found": result.get("issues_found", False),
                    "summary": result.get("summary", ""),
                    "recommendations": result.get("recommendations", []),
                }
            )
    except Exception as e:
        logger.error(f"Analysis failed: {e}")
        return json.dumps({"error": str(e)})
```
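The path-versus-code detection in the handler can be sketched in isolation. This standalone version substitutes a plain `pathlib.Path` check for the handler's `_validate_path` helper, which is not shown in this excerpt:

```python
from pathlib import Path

def resolve_target(target: str) -> str:
    """Return file contents if target names an existing file; otherwise
    treat target as inline code content (mirrors the handler's fallback)."""
    try:
        path = Path(target)
        if path.exists() and path.is_file():
            return path.read_text(encoding="utf-8", errors="replace")
    except (OSError, ValueError):
        # Strings that are not valid paths (e.g. very long or multi-line
        # code snippets) land here and fall through to the code branch.
        pass
    return target

# Inline code is returned unchanged:
snippet = "def f():\n    return 1\n"
assert resolve_target(snippet) == snippet
```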
- src/axom_mcp/handlers/analyze.py:118-165 (handler): Debug analysis handler that detects common code issues such as generic exceptions, print statements, TODO/FIXME comments, bare except clauses, and empty blocks. Returns structured results with issues found and recommendations. (Note: re.MULTILINE is needed here so the `pass\s*$` pattern matches at each line end, not only at end of string.)

```python
async def _analyze_debug(code: str, focus: Optional[str], depth: str) -> Dict[str, Any]:
    """Perform debug analysis."""
    issues = []

    # Common error patterns
    error_patterns = [
        (
            r"\bException\b",
            "Generic exception - consider using specific exception type",
        ),
        (r"\bprint\s*\(", "Debug print statement found"),
        (r"\bTODO\b", "TODO comment found - may indicate incomplete code"),
        (r"\bFIXME\b", "FIXME comment found - indicates known issue"),
        (r"\bXXX\b", "XXX comment found - indicates problematic code"),
        (r"\bHACK\b", "HACK comment found - indicates workaround"),
        (r"except\s*:", "Bare except clause - catches all exceptions"),
        (
            r"except\s+Exception\s*:",
            "Catches generic Exception - may hide specific errors",
        ),
        (r"pass\s*$", "Empty block - may indicate missing implementation"),
    ]

    for pattern, message in error_patterns:
        # MULTILINE so the "pass\s*$" pattern can match at each line end
        matches = re.finditer(pattern, code, re.IGNORECASE | re.MULTILINE)
        for match in matches:
            line_num = code[: match.start()].count("\n") + 1
            issues.append(
                {
                    "line": line_num,
                    "type": "debug",
                    "message": message,
                    "severity": "warning"
                    if "TODO" in message or "FIXME" in message
                    else "info",
                }
            )

    return {
        "success": True,
        "type": "debug",
        "issues_found": len(issues) > 0,
        "issues": issues,
        "summary": f"Found {len(issues)} potential debug issues",
        "recommendations": [i["message"] for i in issues[:5]]
        if issues
        else ["No debug issues found"],
    }
```
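All of the analyzers share the same trick for locating a finding: count newlines before the match offset to get a 1-based line number. A minimal standalone version of that technique:

```python
import re

def find_pattern_lines(code: str, pattern: str) -> list[int]:
    """Return 1-based line numbers of every match of pattern in code."""
    return [
        # Newlines before the match offset = zero-based line index
        code[: m.start()].count("\n") + 1
        for m in re.finditer(pattern, code, re.MULTILINE)
    ]

code = "x = 1\n# TODO: fix\nprint(x)\n"
assert find_pattern_lines(code, r"\bTODO\b") == [2]
assert find_pattern_lines(code, r"\bprint\s*\(") == [3]
```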
- src/axom_mcp/handlers/analyze.py:168-231 (handler): Code review analysis handler that checks for missing docstrings, global variables, lambda expressions, empty blocks, and long functions (over 50 lines). Returns quality issues and improvement recommendations.

```python
async def _analyze_review(
    code: str, focus: Optional[str], depth: str
) -> Dict[str, Any]:
    """Perform code review analysis."""
    issues = []

    # Code quality patterns: (pattern, message, category)
    quality_patterns = [
        (r"^\s*def\s+\w+\s*\([^)]*\)\s*:", "Function missing docstring", "docstring"),
        (r"^\s*class\s+\w+\s*:", "Class missing docstring", "docstring"),
        (r"\bglobal\s+\w+", "Global variable usage - consider refactoring", "scope"),
        (
            r"\blambda\s*:",
            "Lambda expression - consider named function for clarity",
            "readability",
        ),
        (r"if\s+[^:]+\s*:\s*pass", "Empty if block", "logic"),
        (r"for\s+[^:]+\s*:\s*pass", "Empty for loop", "logic"),
        (r"while\s+[^:]+\s*:\s*pass", "Empty while loop", "logic"),
    ]

    for pattern, message, category in quality_patterns:
        matches = re.finditer(pattern, code, re.MULTILINE)
        for match in matches:
            line_num = code[: match.start()].count("\n") + 1
            issues.append(
                {
                    "line": line_num,
                    "type": category,
                    "message": message,
                    "severity": "info",
                }
            )

    # Check for long functions
    function_pattern = r"def\s+(\w+)\s*\([^)]*\)\s*:"
    for match in re.finditer(function_pattern, code):
        func_name = match.group(1)
        func_start = match.start()
        # Simple heuristic: count lines until next def or end
        remaining = code[func_start:]
        next_def = re.search(r"\ndef\s+", remaining[1:])
        func_content = remaining[: next_def.start() + 1] if next_def else remaining
        func_lines = func_content.count("\n")
        if func_lines > 50:
            issues.append(
                {
                    "line": code[:func_start].count("\n") + 1,
                    "type": "complexity",
                    "message": f"Function '{func_name}' is {func_lines} lines - consider breaking down",
                    "severity": "warning",
                }
            )

    return {
        "success": True,
        "type": "review",
        "issues_found": len(issues) > 0,
        "issues": issues,
        "summary": f"Found {len(issues)} code quality issues",
        "recommendations": list(set(i["message"] for i in issues[:5]))
        if issues
        else ["Code looks good!"],
    }
```
- src/axom_mcp/handlers/analyze.py:234-303 (handler): Security audit analysis handler that detects dangerous patterns such as eval/exec, shell=True in subprocess, hardcoded secrets (passwords, API keys, tokens), and SQL injection risks. Returns security issues with severity levels (critical/warning/info), criticals first.

```python
async def _analyze_audit(code: str, focus: Optional[str], depth: str) -> Dict[str, Any]:
    """Perform security audit analysis."""
    issues = []

    # Security patterns: (pattern, message, severity)
    security_patterns = [
        (r"eval\s*\(", "eval() is dangerous - can execute arbitrary code", "critical"),
        (r"exec\s*\(", "exec() is dangerous - can execute arbitrary code", "critical"),
        (r"__import__\s*\(", "Dynamic import - potential security risk", "warning"),
        (
            r"subprocess\.(call|run|Popen)\s*\([^)]*shell\s*=\s*True",
            "Shell=True in subprocess - command injection risk",
            "critical",
        ),
        (r"os\.system\s*\(", "os.system() - command injection risk", "critical"),
        (r"pickle\.loads?\s*\(", "pickle is unsafe for untrusted data", "warning"),
        (r"marshal\.loads?\s*\(", "marshal is unsafe for untrusted data", "warning"),
        (r"yaml\.load\s*\([^)]*\)", "yaml.load() without Loader - unsafe", "warning"),
        (r'password\s*=\s*["\'][^"\']+["\']', "Hardcoded password found", "critical"),
        (r'api_key\s*=\s*["\'][^"\']+["\']', "Hardcoded API key found", "critical"),
        (r'secret\s*=\s*["\'][^"\']+["\']', "Hardcoded secret found", "critical"),
        (r'token\s*=\s*["\'][^"\']+["\']', "Hardcoded token found", "critical"),
        (
            r"SELECT\s+.*\+",
            "Potential SQL injection - string concatenation in query",
            "critical",
        ),
        (
            r"INSERT\s+.*\+",
            "Potential SQL injection - string concatenation in query",
            "critical",
        ),
        (
            r"UPDATE\s+.*\+",
            "Potential SQL injection - string concatenation in query",
            "critical",
        ),
        (
            r"DELETE\s+.*\+",
            "Potential SQL injection - string concatenation in query",
            "critical",
        ),
    ]

    for pattern, message, severity in security_patterns:
        matches = re.finditer(pattern, code, re.IGNORECASE)
        for match in matches:
            line_num = code[: match.start()].count("\n") + 1
            issues.append(
                {
                    "line": line_num,
                    "type": "security",
                    "message": message,
                    "severity": severity,
                }
            )

    # Sort by severity
    issues.sort(key=lambda x: 0 if x["severity"] == "critical" else 1)

    return {
        "success": True,
        "type": "audit",
        "issues_found": len(issues) > 0,
        "issues": issues,
        "summary": f"Found {len(issues)} security issues "
        f"({sum(1 for i in issues if i['severity'] == 'critical')} critical)",
        "recommendations": [i["message"] for i in issues[:5]]
        if issues
        else ["No security issues found"],
    }
```
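The audit handler orders results so critical findings come first. The same two-bucket sort, extracted into a standalone example with illustrative data:

```python
# Sample findings (illustrative data, not real audit output).
issues = [
    {"line": 12, "message": "pickle is unsafe for untrusted data", "severity": "warning"},
    {"line": 30, "message": "Hardcoded password found", "severity": "critical"},
    {"line": 5, "message": "Dynamic import - potential security risk", "severity": "warning"},
]

# Stable sort: criticals float to the top; original order is preserved
# within each severity bucket.
issues.sort(key=lambda x: 0 if x["severity"] == "critical" else 1)

assert [i["line"] for i in issues] == [30, 12, 5]
```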
- src/axom_mcp/handlers/analyze.py:306-373 (handler): Refactoring analysis handler that identifies nested if/for/while blocks, deep indentation, and duplicated lines. Returns refactoring suggestions to improve code structure. (Note: the deep-indentation entry must be a 2-tuple like the others; a stray third element would break the unpacking loop with a ValueError.)

```python
async def _analyze_refactor(
    code: str, focus: Optional[str], depth: str
) -> Dict[str, Any]:
    """Perform refactoring analysis."""
    suggestions = []

    # Refactoring patterns: (pattern, message) pairs
    refactor_patterns = [
        (
            r"(\bif\s+[^:]+\s*:\s*\n\s*)(if\s+)",
            "Nested if statements - consider combining conditions",
        ),
        (
            r"(\bfor\s+[^:]+\s*:\s*\n\s*)(for\s+)",
            "Nested loops - consider extracting method",
        ),
        (
            r"(\bwhile\s+[^:]+\s*:\s*\n\s*)(while\s+)",
            "Nested while loops - consider extracting method",
        ),
        (r"^\s{8,}", "Deep indentation - consider extracting method"),
    ]

    for pattern, message in refactor_patterns:
        matches = re.finditer(pattern, code, re.MULTILINE)
        for match in matches:
            line_num = code[: match.start()].count("\n") + 1
            suggestions.append(
                {
                    "line": line_num,
                    "type": "refactor",
                    "message": message,
                    "severity": "info",
                }
            )

    # Check for duplicate code (simple heuristic)
    lines = code.split("\n")
    line_counts = {}
    for i, line in enumerate(lines):
        stripped = line.strip()
        if stripped and len(stripped) > 10:
            if stripped in line_counts:
                line_counts[stripped].append(i + 1)
            else:
                line_counts[stripped] = [i + 1]

    for line_text, occurrences in line_counts.items():
        if len(occurrences) > 2:
            suggestions.append(
                {
                    "line": occurrences[0],
                    "type": "duplicate",
                    "message": f"Code appears to be duplicated on lines: {occurrences}",
                    "severity": "info",
                }
            )

    return {
        "success": True,
        "type": "refactor",
        "issues_found": len(suggestions) > 0,
        "issues": suggestions,
        "summary": f"Found {len(suggestions)} refactoring opportunities",
        "recommendations": [s["message"] for s in suggestions[:5]]
        if suggestions
        else ["Code structure looks good!"],
    }
```
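The duplicate-code heuristic in this handler only considers identical stripped lines longer than 10 characters, and only flags lines that occur at least three times. A standalone sketch of the same idea (the helper name is illustrative):

```python
def find_duplicate_lines(code: str, min_len: int = 10, min_count: int = 3) -> dict:
    """Map each stripped line longer than min_len characters that appears
    at least min_count times to its 1-based line numbers."""
    seen: dict = {}
    for i, line in enumerate(code.split("\n"), start=1):
        stripped = line.strip()
        if len(stripped) > min_len:
            seen.setdefault(stripped, []).append(i)
    # Keep only lines repeated often enough to suggest duplication.
    return {text: nums for text, nums in seen.items() if len(nums) >= min_count}

code = "result = compute(x)\ny = 1\nresult = compute(x)\nz = 2\nresult = compute(x)\n"
assert find_duplicate_lines(code) == {"result = compute(x)": [1, 3, 5]}
```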
- src/axom_mcp/handlers/analyze.py:376-426 (handler): Test coverage analysis handler that counts test indicators (test functions, assertions, pytest/unittest decorators and imports) and warns when a file defines functions but no tests. Returns the indicators and coverage recommendations.

```python
async def _analyze_test(code: str, focus: Optional[str], depth: str) -> Dict[str, Any]:
    """Perform test coverage analysis."""
    issues = []

    # Test patterns
    test_patterns = [
        (r"def\s+test_\w+\s*\(", "Test function found"),
        (r"assert\s+", "Assertion found"),
        (r"@pytest", "pytest decorator found"),
        (r"@unittest", "unittest decorator found"),
        (r"import\s+unittest", "unittest module imported"),
        (r"import\s+pytest", "pytest module imported"),
    ]

    test_indicators = 0
    for pattern, message in test_patterns:
        matches = list(re.finditer(pattern, code))
        if matches:
            test_indicators += len(matches)
            for match in matches:
                line_num = code[: match.start()].count("\n") + 1
                issues.append(
                    {
                        "line": line_num,
                        "type": "test",
                        "message": message,
                        "severity": "info",
                    }
                )

    # Check for missing test patterns
    if "def " in code and test_indicators == 0:
        issues.append(
            {
                "line": 1,
                "type": "test",
                "message": "No test functions found - consider adding tests",
                "severity": "warning",
            }
        )

    return {
        "success": True,
        "type": "test",
        "issues_found": test_indicators == 0,
        "issues": issues,
        "summary": f"Found {test_indicators} test indicators",
        "recommendations": ["Add more test coverage"]
        if test_indicators < 3
        else ["Good test coverage!"],
    }
```
- src/axom_mcp/schemas.py:235-263 (schema): AnalyzeInput Pydantic model defining the input schema for the axom_mcp_analyze tool. Validates type (debug/review/audit/refactor/test), target (file path or code), focus (optional focus area), depth (minimal/low/medium/high/max), output_format (summary/detailed/actionable), and the chain parameter for operation chaining.

```python
class AnalyzeInput(BaseModel):
    """Input schema for axom_mcp_analyze tool."""

    model_config = {"extra": "forbid"}

    type: str = Field(
        ..., pattern="^(debug|review|audit|refactor|test)$", description="Analysis type"
    )
    target: str = Field(..., min_length=1, description="File path or code to analyze")
    focus: Optional[str] = Field(
        default=None,
        max_length=100,
        description="Focus area (e.g., security, performance)",
    )
    depth: Optional[str] = Field(
        default="medium",
        pattern="^(minimal|low|medium|high|max)$",
        description="Analysis depth level",
    )
    output_format: Optional[str] = Field(
        default="summary",
        pattern="^(summary|detailed|actionable)$",
        description="Output format preference",
    )
    chain: Optional[List[Dict[str, Any]]] = Field(
        default=None,
        max_length=10,
        description="Chain operations based on analysis results",
    )
```
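The `pattern` constraints in this schema are plain anchored regular expressions, so the allowed values can be checked independently of pydantic. The constants below copy the patterns from `AnalyzeInput`:

```python
import re

# Anchored patterns copied from the AnalyzeInput schema above.
TYPE_PATTERN = r"^(debug|review|audit|refactor|test)$"
DEPTH_PATTERN = r"^(minimal|low|medium|high|max)$"

assert re.match(TYPE_PATTERN, "audit")
assert re.match(DEPTH_PATTERN, "medium")
assert re.match(TYPE_PATTERN, "lint") is None      # not a supported analysis type
assert re.match(DEPTH_PATTERN, "extreme") is None  # not a supported depth
```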
- src/axom_mcp/server.py:222-283 (registration): Tool entry in the TOOLS list registering axom_mcp_analyze with its description, analysis types, focus areas, depth levels, and input schema. Annotations mark the tool as read-only, idempotent, and non-destructive.

```python
Tool(
    name="axom_mcp_analyze",
    description="""Analyze code and data with configurable depth and scope.

Analysis Types:
- debug: Troubleshoot issues, investigate errors, diagnose problems
- review: Code review, quality assessment, best practices
- audit: Security audit, compliance check, vulnerability scan
- refactor: Refactoring suggestions, code improvement recommendations
- test: Test coverage analysis, test generation suggestions

Focus Areas:
- security: Security vulnerabilities, injection risks, auth issues
- performance: Performance bottlenecks, optimization opportunities
- architecture: Architectural patterns, design issues
- maintainability: Code smell, complexity, documentation

Depth Levels:
- minimal: Quick scan, critical issues only
- low: Basic analysis, obvious issues
- medium: Standard analysis (default)
- high: Deep analysis, all issues
- max: Exhaustive analysis, edge cases

Chain Support: Use chain parameter to automatically act on analysis results.""",
    inputSchema={
        "type": "object",
        "properties": {
            "type": {
                "type": "string",
                "enum": ["debug", "review", "audit", "refactor", "test"],
                "description": "Analysis type",
            },
            "target": {
                "type": "string",
                "description": "File path or code to analyze",
            },
            "focus": {
                "type": "string",
                "description": "Focus area (e.g., security, performance)",
            },
            "depth": {
                "type": "string",
                "enum": ["minimal", "low", "medium", "high", "max"],
                "description": "Analysis depth level",
            },
            "output_format": {
                "type": "string",
                "enum": ["summary", "detailed", "actionable"],
                "description": "Output format preference",
            },
            "chain": {
                "type": "array",
                "items": {"type": "object"},
                "description": "Chain operations based on results",
            },
        },
        "required": ["type", "target"],
    },
    annotations=TOOL_ANNOTATIONS["analyze"],
),
```
- src/axom_mcp/server.py:486-506 (registration): call_tool handler that routes axom_mcp_analyze tool calls to the handle_analyze function; the "axom_mcp_analyze" branch calls await handle_analyze(arguments).

```python
@server.call_tool()
async def call_tool(name: str, arguments: Dict[str, Any]) -> List[TextContent]:
    """Handle tool calls."""
    try:
        if name == "axom_mcp_memory":
            result = await handle_memory(arguments)
        elif name == "axom_mcp_exec":
            result = await handle_exec(arguments)
        elif name == "axom_mcp_analyze":
            result = await handle_analyze(arguments)
        elif name == "axom_mcp_discover":
            result = await handle_discover(arguments)
        elif name == "axom_mcp_transform":
            result = await handle_transform(arguments)
        else:
            return [TextContent(type="text", text=f"Unknown tool: {name}")]
        return [TextContent(type="text", text=result)]
    except Exception as e:
        logger.error(f"Tool call failed: {name} - {e}")
        return [TextContent(type="text", text=f"Error: {str(e)}")]
```
- src/axom_mcp/server.py:27-33 (registration): Import bringing handle_analyze into the server module from the .handlers package, enabling the routing in call_tool.

```python
from .handlers import (
    handle_memory,
    handle_exec,
    handle_analyze,
    handle_discover,
    handle_transform,
)
```