
search_log_all_records

Retrieve and filter all log records by scope and content patterns, with contextual data before and after each match, using Log Analyzer MCP.

Instructions

Search for all log records, optionally filtering by scope and content patterns, with context.

Input Schema

Name                          | Required | Description                                                                                        | Default
context_after                 | No       | Number of lines after a match.                                                                     | 2
context_before                | No       | Number of lines before a match.                                                                    | 2
log_content_patterns_override | No       | Comma-separated list of REGEX patterns for log messages (overrides .env content filters).         | ""
log_dirs_override             | No       | Comma-separated list of log directories, files, or glob patterns (overrides .env file locations). | ""
scope                         | No       | Logging scope to search within (from .env scopes or default).                                      | default
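
For illustration, the snippet below shows a set of arguments a client might pass to this tool. The directory and pattern values are hypothetical; as the implementation below shows, the handler splits each comma-separated override string into a list, and an empty string means "no override".

    # Hypothetical arguments for search_log_all_records; all fields are optional.
    arguments = {
        "scope": "default",
        "context_before": 1,
        "context_after": 3,
        "log_dirs_override": "logs/,logs/archive/*.log",      # hypothetical paths/globs
        "log_content_patterns_override": "ERROR,Traceback",   # hypothetical regex patterns
    }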

Implementation Reference

  • The MCP tool handler for 'search_log_all_records'. It validates input parameters via Pydantic from the function signature, builds filter criteria with build_filter_criteria from common.utils, instantiates an AnalysisEngine, and delegates to its search_logs method to perform the log search and return the results.
    @mcp.tool()
    async def search_log_all_records(
        scope: str = "default",
        context_before: int = 2,
        context_after: int = 2,
        log_dirs_override: str = "",
        log_content_patterns_override: str = "",
    ) -> list[dict[str, Any]]:
        """Search for all log records, optionally filtering by scope and content patterns, with context."""
        # Forcing re-initialization of analysis_engine for debugging module caching.
        # Pass project_root_for_config=None to allow AnalysisEngine to determine it.
        current_analysis_engine = AnalysisEngine(logger_instance=logger, project_root_for_config=None)
        print(
            f"DEBUG_MCP_TOOL_SEARCH_ALL: Entered search_log_all_records with log_dirs_override='{log_dirs_override}'",
            file=sys.stderr,
            flush=True,
        )
        logger.info(
            "MCP search_log_all_records called with scope='%s', context=%sB/%sA, "
            "log_dirs_override='%s', log_content_patterns_override='%s'",
            scope,
            context_before,
            context_after,
            log_dirs_override,
            log_content_patterns_override,
        )
        log_dirs_list = log_dirs_override.split(",") if log_dirs_override else None
        log_content_patterns_list = log_content_patterns_override.split(",") if log_content_patterns_override else None
        filter_criteria = build_filter_criteria(
            scope=scope,
            context_before=context_before,
            context_after=context_after,
            log_dirs_override=log_dirs_list,
            log_content_patterns_override=log_content_patterns_list,
        )
        try:
            results = await asyncio.to_thread(current_analysis_engine.search_logs, filter_criteria)
            logger.info("search_log_all_records returning %s records.", len(results))
            return results
        except Exception as e:  # pylint: disable=broad-exception-caught
            logger.error("Error in search_log_all_records: %s", e, exc_info=True)
            custom_message = f"Failed to search all logs: {e!s}"
            raise McpError(ErrorData(code=-32603, message=custom_message)) from e
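    Note: the handler re-creates the AnalysisEngine on every call (the inline comment marks this as a debugging measure for module caching) and runs the synchronous search_logs via asyncio.to_thread, so the blocking file I/O does not stall the server's event loop. Empty override strings are normalized to None, meaning "no override".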
  • Pydantic models BaseSearchInput (base schema for search tool inputs) and SearchLogAllInput (the specific subclass for search_log_all_records). Note: FastMCP validates against the function signature; these models document the expected inputs.
    class BaseSearchInput(BaseModel):
        """Base model for common search parameters."""

        scope: str = Field(default="default", description="Logging scope to search within (from .env scopes or default).")
        context_before: int = Field(default=2, description="Number of lines before a match.", ge=0)
        context_after: int = Field(default=2, description="Number of lines after a match.", ge=0)
        log_dirs_override: str = Field(
            default="",
            description="Comma-separated list of log directories, files, or glob patterns (overrides .env for file locations).",
        )
        log_content_patterns_override: str = Field(
            default="",
            description="Comma-separated list of REGEX patterns for log messages (overrides .env content filters).",
        )


    class SearchLogAllInput(BaseSearchInput):
        """Input for search_log_all_records."""
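  • Example (not from the source): since FastMCP validates against the function signature, these models are primarily documentation, but they can also validate arguments directly. A minimal sketch, assuming Pydantic v2:
    # Minimal sketch (assumes Pydantic v2): validate a raw argument dict
    # against the documented schema before forwarding it to the tool.
    raw_args = {"scope": "runtime", "context_before": 1, "context_after": 3}
    validated = SearchLogAllInput(**raw_args)
    print(validated.model_dump())
    # A negative context value would fail validation because of the ge=0 bounds.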
  • Core implementation of log searching logic in AnalysisEngine.search_logs method. Determines target log files, parses log lines, applies content/level filters, time-based filters, positional limits (first_n/last_n), extracts context lines, and returns structured log entries. This is the primary helper invoked by the tool handler.
    def search_logs(self, filter_criteria: Dict[str, Any]) -> List[Dict[str, Any]]:
        """
        Main method to search logs based on various criteria.

        filter_criteria is a dictionary that can contain:
        - log_dirs_override: List[str] (paths/globs to search instead of config)
        - scope: str (e.g., "mcp", "runtime" to use predefined paths from config)
        - log_content_patterns_override: List[str] (regexes for log message content)
        - level_filter: str (e.g., "ERROR", "WARNING")
        - time_filter_type: str ("minutes", "hours", "days") - maps to minutes, hours, days keys
        - time_filter_value: int (e.g., 30 for 30 minutes) - maps to minutes, hours, days values
        - positional_filter_type: str ("first_n", "last_n") - maps to first_n, last_n keys
        - positional_filter_value: int (e.g., 10 for first 10 records) - maps to first_n, last_n values
        - context_before: int (lines of context before match)
        - context_after: int (lines of context after match)
        """
        self.logger.info(f"[AnalysisEngine.search_logs] Called with filter_criteria: {filter_criteria}")
        all_raw_lines_by_file: Dict[str, List[str]] = {}
        parsed_entries: List[ParsedLogEntry] = []

        # 1. Determine target log files
        target_files = self._get_target_log_files(
            scope=filter_criteria.get("scope"),
            log_dirs_override=filter_criteria.get("log_dirs_override"),
        )
        if not target_files:
            self.logger.info(
                "[AnalysisEngine.search_logs] No log files found by _get_target_log_files. Returning pathway OK message."
            )
            # Return a specific message indicating pathway is okay but no files found
            return [{"message": "No target files found, but pathway OK."}]
        self.logger.info(f"[AnalysisEngine.search_logs] Target files found: {target_files}")

        # 2. Parse all lines from target files
        for file_path in target_files:
            try:
                with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
                    lines = f.readlines()
                # Store all lines for context extraction later
                all_raw_lines_by_file[file_path] = [line.rstrip("\n") for line in lines]  # Store raw lines as they are
                for i, line_content in enumerate(lines):
                    entry = self._parse_log_line(line_content.strip(), file_path, i + 1)  # line_number is 1-indexed
                    if entry:
                        parsed_entries.append(entry)
            except Exception as e:  # pylint: disable=broad-exception-caught
                self.logger.error(f"Error reading or parsing file {file_path}: {e}", exc_info=True)
                continue  # Continue with other files

        self.logger.info(f"[AnalysisEngine.search_logs] Parsed {len(parsed_entries)} entries from all target files.")
        if not parsed_entries:
            self.logger.info("[AnalysisEngine.search_logs] No entries parsed from target files.")
            return []

        # 3. Apply content filters (level and regex)
        filtered_entries = self._apply_content_filters(parsed_entries, filter_criteria)
        if not filtered_entries:
            self.logger.info("[AnalysisEngine.search_logs] No entries left after content filters.")
            return []

        # 4. Apply time filters
        filtered_entries = self._apply_time_filters(filtered_entries, filter_criteria)
        if not filtered_entries:
            self.logger.info("[AnalysisEngine.search_logs] No entries left after time filters.")
            return []

        # 5. Apply positional filters (first_n, last_n)
        # Note: _apply_positional_filters sorts by timestamp and handles entries without timestamps
        filtered_entries = self._apply_positional_filters(filtered_entries, filter_criteria)
        if not filtered_entries:
            self.logger.info("[AnalysisEngine.search_logs] No entries left after positional filters.")
            return []

        # 6. Extract context lines for the final set of entries
        # Use context_before and context_after from filter_criteria, or defaults from config
        context_before = filter_criteria.get("context_before", self.default_context_lines_before)
        context_after = filter_criteria.get("context_after", self.default_context_lines_after)
        final_entries_with_context = self._extract_context_lines(
            filtered_entries, all_raw_lines_by_file, context_before, context_after
        )

        self.logger.info(f"[AnalysisEngine.search_logs] Returning {len(final_entries_with_context)} processed entries.")
        # The tool expects a list of dicts, and ParsedLogEntry is already a Dict[str, Any]
        return final_entries_with_context
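  • Example (not from the source): a hypothetical filter_criteria dict using the keys documented in the docstring above, illustrating the pipeline order (content filters, then time filters, then positional filters, then context extraction):
    # Hypothetical criteria; keys match those listed in the search_logs docstring.
    criteria = {
        "scope": "runtime",
        "log_content_patterns_override": [r"ERROR|CRITICAL"],
        "minutes": 30,   # keep entries from the last 30 minutes
        "last_n": 10,    # then keep only the 10 most recent matches
        "context_before": 2,
        "context_after": 2,
    }
    # engine = AnalysisEngine(logger_instance=logger, project_root_for_config=None)
    # entries = engine.search_logs(criteria)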
  • Utility function build_filter_criteria that constructs the filter_criteria dictionary from tool parameters, which is then passed to AnalysisEngine.search_logs.
    def build_filter_criteria(
        scope: Optional[str] = None,
        context_before: Optional[int] = None,
        context_after: Optional[int] = None,
        log_dirs_override: Optional[List[str]] = None,  # Expecting list here
        log_content_patterns_override: Optional[List[str]] = None,  # Expecting list here
        minutes: Optional[int] = None,
        hours: Optional[int] = None,
        days: Optional[int] = None,
        first_n: Optional[int] = None,
        last_n: Optional[int] = None,
    ) -> Dict[str, Any]:
        """Helper function to build the filter_criteria dictionary."""
        criteria: Dict[str, Any] = {}
        if scope is not None:
            criteria["scope"] = scope
        if context_before is not None:
            criteria["context_before"] = context_before
        if context_after is not None:
            criteria["context_after"] = context_after
        if log_dirs_override is not None:  # Already a list or None
            criteria["log_dirs_override"] = log_dirs_override
        if log_content_patterns_override is not None:  # Already a list or None
            criteria["log_content_patterns_override"] = log_content_patterns_override
        if minutes is not None:
            criteria["minutes"] = minutes
        if hours is not None:
            criteria["hours"] = hours
        if days is not None:
            criteria["days"] = days
        if first_n is not None:
            criteria["first_n"] = first_n
        if last_n is not None:
            criteria["last_n"] = last_n
        return criteria
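  • Usage example: only non-None arguments appear in the returned dictionary, so omitted filters are simply absent (the path below is hypothetical):
    criteria = build_filter_criteria(
        scope="default",
        context_before=2,
        context_after=2,
        log_dirs_override=["logs/server.log"],  # hypothetical path
    )
    # criteria == {
    #     "scope": "default",
    #     "context_before": 2,
    #     "context_after": 2,
    #     "log_dirs_override": ["logs/server.log"],
    # }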

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/djm81/log_analyzer_mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.