# search_log_file
Search log files using regex patterns to find specific entries with surrounding context for debugging and troubleshooting.
## Instructions
Searches a log file using regex pattern and returns matching lines with surrounding context. Supports pagination of results.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| filename | Yes | Name of the log file to search | |
| pattern | Yes | Regex pattern to search for | |
| context_lines | No | Number of lines to show before and after each match (max: 10) | 2 |
| case_sensitive | No | Whether the search should be case-sensitive | false |
| max_matches | No | Maximum number of matches to return (max: 500) | 50 |
| skip_matches | No | Number of matches to skip (for pagination) | 0 |
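
For orientation, the sketch below shows how a client might call this tool over stdio using the MCP Python SDK. The launch command (`python -m log_mcp.server`) and the argument values are illustrative assumptions, not part of the server's documented interface.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumed launch command; adjust to however the log MCP server is actually started.
    server = StdioServerParameters(command="python", args=["-m", "log_mcp.server"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Search app.log for ERROR/CRITICAL lines with 3 lines of context each.
            result = await session.call_tool(
                "search_log_file",
                {
                    "filename": "app.log",
                    "pattern": "ERROR|CRITICAL",
                    "context_lines": 3,
                    "max_matches": 100,
                },
            )
            for content in result.content:
                print(content.text)


if __name__ == "__main__":
    asyncio.run(main())
```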
## Implementation Reference
- **log_mcp/server.py:401-534 (handler)**: The handler branch within `call_tool` that executes the `search_log_file` tool. It validates its inputs, resolves the log file, runs the regex search line by line, extracts matches with configurable context, supports pagination via `skip_matches` and `max_matches`, and formats the output with line numbers and match markers (a framework-free sketch of this search/pagination logic appears after this list).

  ```python
  elif name == "search_log_file":
      filename = arguments.get("filename")
      pattern = arguments.get("pattern")
      context_lines = arguments.get("context_lines", 2)
      case_sensitive = arguments.get("case_sensitive", False)
      max_matches = arguments.get("max_matches", 50)
      skip_matches = arguments.get("skip_matches", 0)

      if not filename:
          return [TextContent(
              type="text",
              text="Error: filename parameter is required"
          )]

      if not pattern:
          return [TextContent(
              type="text",
              text="Error: pattern parameter is required"
          )]

      # Validate parameters
      if context_lines < 0 or context_lines > 10:
          return [TextContent(
              type="text",
              text="Error: context_lines must be between 0 and 10"
          )]

      if max_matches < 1 or max_matches > 500:
          return [TextContent(
              type="text",
              text="Error: max_matches must be between 1 and 500"
          )]

      if skip_matches < 0:
          return [TextContent(
              type="text",
              text="Error: skip_matches must be >= 0"
          )]

      try:
          log_dir, log_file = resolve_log_file(filename)
      except ValueError as e:
          return [TextContent(
              type="text",
              text=f"Error: {e}"
          )]

      if not log_file.exists():
          return [TextContent(
              type="text",
              text=f"Log file does not exist: {log_file}"
          )]

      if not log_file.is_file():
          return [TextContent(
              type="text",
              text=f"Path exists but is not a file: {log_file}"
          )]

      # Compile regex pattern
      try:
          flags = 0 if case_sensitive else re.IGNORECASE
          regex = re.compile(pattern, flags)
      except re.error as e:
          return [TextContent(
              type="text",
              text=f"Error: Invalid regex pattern: {e}"
          )]

      try:
          with open(log_file, 'r') as f:
              lines = f.readlines()

          total_lines = len(lines)

          # Find all matches
          matches = []
          for i, line in enumerate(lines):
              if regex.search(line):
                  matches.append(i)

          total_matches = len(matches)

          if total_matches == 0:
              return [TextContent(
                  type="text",
                  text=f"No matches found for pattern: {pattern}"
              )]

          # Apply pagination
          paginated_matches = matches[skip_matches:skip_matches + max_matches]

          if not paginated_matches:
              return [TextContent(
                  type="text",
                  text=f"No more matches (total: {total_matches}, skipped: {skip_matches})"
              )]

          result = f"File: {log_file}\n"
          result += f"Pattern: {pattern}\n"
          result += f"Total matches: {total_matches}\n"
          result += f"Showing matches {skip_matches + 1}-{skip_matches + len(paginated_matches)}\n"
          result += f"Context lines: {context_lines}\n"
          result += f"\n{'=' * 60}\n\n"

          for match_idx in paginated_matches:
              # Calculate context range
              start = max(0, match_idx - context_lines)
              end = min(total_lines, match_idx + context_lines + 1)

              # Show context
              for i in range(start, end):
                  line_num = i + 1
                  marker = ">>>" if i == match_idx else "   "
                  result += f"{marker} {line_num:6d} | {lines[i]}"

              result += f"\n{'-' * 60}\n\n"

          if skip_matches + len(paginated_matches) < total_matches:
              remaining = total_matches - (skip_matches + len(paginated_matches))
              result += f"... {remaining} more matches available (use skip_matches={skip_matches + len(paginated_matches)}) ..."

          return [TextContent(type="text", text=result)]

      except PermissionError:
          return [TextContent(
              type="text",
              text=f"Permission denied reading: {log_file}"
          )]
      except Exception as e:
          return [TextContent(
              type="text",
              text=f"Error searching file: {e}"
          )]
  ```
- **log_mcp/server.py:189-226 (registration)**: Registration of the `search_log_file` tool in the `list_tools()` method, providing the tool's name, description, and input schema for the MCP protocol.

  ```python
  Tool(
      name="search_log_file",
      description="Searches a log file using regex pattern and returns matching lines with surrounding context. Supports pagination of results.",
      inputSchema={
          "type": "object",
          "properties": {
              "filename": {
                  "type": "string",
                  "description": "Name of the log file to search"
              },
              "pattern": {
                  "type": "string",
                  "description": "Regex pattern to search for"
              },
              "context_lines": {
                  "type": "integer",
                  "description": "Number of lines to show before and after each match (default: 2, max: 10)",
                  "default": 2
              },
              "case_sensitive": {
                  "type": "boolean",
                  "description": "Whether the search should be case-sensitive (default: false)",
                  "default": False
              },
              "max_matches": {
                  "type": "integer",
                  "description": "Maximum number of matches to return (default: 50, max: 500)",
                  "default": 50
              },
              "skip_matches": {
                  "type": "integer",
                  "description": "Number of matches to skip (for pagination, default: 0)",
                  "default": 0
              }
          },
          "required": ["filename", "pattern"]
      }
  )
  ```
- **log_mcp/server.py:192-225 (schema)**: Input schema definition for the `search_log_file` tool, specifying the `filename`, `pattern`, `context_lines`, `case_sensitive`, `max_matches`, and `skip_matches` parameters with their types, descriptions, defaults, and required fields. It is identical to the `inputSchema` argument shown in the registration snippet above, so it is not repeated here; a sketch of checking arguments against it appears after this list.
- **log_mcp/server.py:37-69 (helper)**: Helper function `resolve_log_file`, used by the `search_log_file` handler to safely resolve the log file path within the configured log directories (see the containment sketch after this list).

  ```python
  def resolve_log_file(filename: str) -> tuple[Path, Path]:
      """
      Resolve a filename to a full path within allowed directories.

      Returns: (log_dir, log_file) tuple
      Raises: ValueError if file is not found or not in allowed directories
      """
      directories = get_log_directories()

      # If filename is already an absolute path, validate it's in allowed dirs
      file_path = Path(filename)
      if file_path.is_absolute():
          try:
              resolved = file_path.resolve()
              for log_dir in directories:
                  if str(resolved).startswith(str(log_dir.resolve())):
                      return log_dir, resolved
          except Exception:
              pass
          raise ValueError(f"File not in any allowed log directory: {filename}")

      # Try to find the file in each directory
      for log_dir in directories:
          log_file = log_dir / filename
          if log_file.exists():
              return log_dir, log_file.resolve()

      # If not found, use the first directory (for error messages)
      if directories:
          return directories[0], (directories[0] / filename).resolve()

      raise ValueError("No log directories configured")
  ```
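
To make the handler's core behavior easier to reason about in isolation, here is a minimal, framework-free sketch of the same search, context, and pagination logic referenced from the handler entry above. The function name and return shape are illustrative only, not part of log_mcp.

```python
import re


def search_lines(lines, pattern, context_lines=2, case_sensitive=False,
                 max_matches=50, skip_matches=0):
    """Return (total_matches, context_blocks) for a regex search over lines."""
    flags = 0 if case_sensitive else re.IGNORECASE
    regex = re.compile(pattern, flags)

    # Indices of every matching line, in file order.
    match_indices = [i for i, line in enumerate(lines) if regex.search(line)]

    # Pagination: skip the first `skip_matches` hits, keep at most `max_matches`.
    page = match_indices[skip_matches:skip_matches + max_matches]

    blocks = []
    for idx in page:
        start = max(0, idx - context_lines)
        end = min(len(lines), idx + context_lines + 1)
        blocks.append([
            # (marker, 1-based line number, line text)
            (">>>" if i == idx else "   ", i + 1, lines[i])
            for i in range(start, end)
        ])
    return len(match_indices), blocks


# Second page of matches, one match per page:
total, page_blocks = search_lines(
    ["INFO ok", "ERROR boom", "INFO ok", "error again"],
    r"error",
    max_matches=1,
    skip_matches=1,
)
print(total)        # 2
print(page_blocks)  # one context block, centered on line 4 ("error again")
```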
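
The advertised input schema can also be exercised offline. The sketch below checks candidate arguments against a copy of that schema using the third-party jsonschema package; this is purely illustrative, since the server itself relies on the explicit range checks in its handler rather than on JSON Schema validation.

```python
from jsonschema import ValidationError, validate

SEARCH_LOG_FILE_SCHEMA = {
    "type": "object",
    "properties": {
        "filename": {"type": "string"},
        "pattern": {"type": "string"},
        "context_lines": {"type": "integer", "default": 2},
        "case_sensitive": {"type": "boolean", "default": False},
        "max_matches": {"type": "integer", "default": 50},
        "skip_matches": {"type": "integer", "default": 0},
    },
    "required": ["filename", "pattern"],
}

# Valid: both required fields present; optionals omitted (defaults are applied server-side).
validate({"filename": "app.log", "pattern": "ERROR"}, SEARCH_LOG_FILE_SCHEMA)

# Invalid: "filename" is required by the schema.
try:
    validate({"pattern": "ERROR"}, SEARCH_LOG_FILE_SCHEMA)
except ValidationError as exc:
    print(exc.message)  # 'filename' is a required property
```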
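
Finally, a small pytest-style sketch of `resolve_log_file`'s containment behavior, as promised in the helper entry above. The patch target `log_mcp.server.get_log_directories` and the temporary-directory setup are assumptions made for illustration.

```python
import tempfile
from pathlib import Path
from unittest.mock import patch

import pytest

from log_mcp.server import resolve_log_file


def test_resolve_log_file_containment():
    with tempfile.TemporaryDirectory() as tmp:
        log_dir = Path(tmp)
        (log_dir / "app.log").write_text("hello\n")

        with patch("log_mcp.server.get_log_directories", return_value=[log_dir]):
            # A bare filename is located inside the allowed directory.
            found_dir, found_file = resolve_log_file("app.log")
            assert found_dir == log_dir
            assert found_file == (log_dir / "app.log").resolve()

            # An absolute path outside every allowed directory is rejected.
            with pytest.raises(ValueError):
                resolve_log_file("/etc/passwd")
```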