
Log MCP Server

by thhart

search_log_file

Search log files using regex patterns to find specific entries with surrounding context for debugging and troubleshooting.

Instructions

Searches a log file using regex pattern and returns matching lines with surrounding context. Supports pagination of results.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| filename | Yes | Name of the log file to search | |
| pattern | Yes | Regex pattern to search for | |
| context_lines | No | Number of lines to show before and after each match (max: 10) | 2 |
| case_sensitive | No | Whether the search should be case-sensitive | false |
| max_matches | No | Maximum number of matches to return (max: 500) | 50 |
| skip_matches | No | Number of matches to skip (for pagination) | 0 |
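Over MCP, these parameters travel as a single JSON arguments object. A hypothetical call paginating through error lines might look like the following (the values are purely illustrative, not taken from the server's documentation):

```python
# Hypothetical arguments for one search_log_file call; names follow the
# schema above, values are illustrative.
arguments = {
    "filename": "app.log",           # required
    "pattern": r"ERROR|Exception",   # required; interpreted as a regex
    "context_lines": 3,              # 0-10 (default 2)
    "case_sensitive": False,         # default False
    "max_matches": 100,              # 1-500 (default 50)
    "skip_matches": 0,               # increase to page through results
}
```

To fetch the next page, resend the same call with skip_matches increased by the number of matches already shown.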

Implementation Reference

  • The handler function within call_tool that executes the search_log_file tool. It validates inputs, resolves the log file, performs regex search on lines, extracts matches with configurable context, supports pagination via skip_matches and max_matches, and formats the output with line numbers and markers.
    elif name == "search_log_file":
        filename = arguments.get("filename")
        pattern = arguments.get("pattern")
        context_lines = arguments.get("context_lines", 2)
        case_sensitive = arguments.get("case_sensitive", False)
        max_matches = arguments.get("max_matches", 50)
        skip_matches = arguments.get("skip_matches", 0)
    
        if not filename:
            return [TextContent(
                type="text",
                text="Error: filename parameter is required"
            )]
    
        if not pattern:
            return [TextContent(
                type="text",
                text="Error: pattern parameter is required"
            )]
    
        # Validate parameters
        if context_lines < 0 or context_lines > 10:
            return [TextContent(
                type="text",
                text="Error: context_lines must be between 0 and 10"
            )]
    
        if max_matches < 1 or max_matches > 500:
            return [TextContent(
                type="text",
                text="Error: max_matches must be between 1 and 500"
            )]
    
        if skip_matches < 0:
            return [TextContent(
                type="text",
                text="Error: skip_matches must be >= 0"
            )]
    
        try:
            log_dir, log_file = resolve_log_file(filename)
        except ValueError as e:
            return [TextContent(
                type="text",
                text=f"Error: {e}"
            )]
    
        if not log_file.exists():
            return [TextContent(
                type="text",
                text=f"Log file does not exist: {log_file}"
            )]
    
        if not log_file.is_file():
            return [TextContent(
                type="text",
                text=f"Path exists but is not a file: {log_file}"
            )]
    
        # Compile regex pattern
        try:
            flags = 0 if case_sensitive else re.IGNORECASE
            regex = re.compile(pattern, flags)
        except re.error as e:
            return [TextContent(
                type="text",
                text=f"Error: Invalid regex pattern: {e}"
            )]
    
        try:
            with open(log_file, 'r') as f:
                lines = f.readlines()
                total_lines = len(lines)
    
                # Find all matches
                matches = []
                for i, line in enumerate(lines):
                    if regex.search(line):
                        matches.append(i)
    
                total_matches = len(matches)
    
                if total_matches == 0:
                    return [TextContent(
                        type="text",
                        text=f"No matches found for pattern: {pattern}"
                    )]
    
                # Apply pagination
                paginated_matches = matches[skip_matches:skip_matches + max_matches]
    
                if not paginated_matches:
                    return [TextContent(
                        type="text",
                        text=f"No more matches (total: {total_matches}, skipped: {skip_matches})"
                    )]
    
                result = f"File: {log_file}\n"
                result += f"Pattern: {pattern}\n"
                result += f"Total matches: {total_matches}\n"
                result += f"Showing matches {skip_matches + 1}-{skip_matches + len(paginated_matches)}\n"
                result += f"Context lines: {context_lines}\n"
                result += f"\n{'=' * 60}\n\n"
    
                for match_idx in paginated_matches:
                    # Calculate context range
                    start = max(0, match_idx - context_lines)
                    end = min(total_lines, match_idx + context_lines + 1)
    
                    # Show context
                    for i in range(start, end):
                        line_num = i + 1
                        marker = ">>>" if i == match_idx else "   "
                        result += f"{marker} {line_num:6d} | {lines[i]}"
    
                    result += f"\n{'-' * 60}\n\n"
    
                if skip_matches + len(paginated_matches) < total_matches:
                    remaining = total_matches - (skip_matches + len(paginated_matches))
                    result += f"... {remaining} more matches available (use skip_matches={skip_matches + len(paginated_matches)}) ..."
    
                return [TextContent(type="text", text=result)]
    
        except PermissionError:
            return [TextContent(
                type="text",
                text=f"Permission denied reading: {log_file}"
            )]
        except Exception as e:
            return [TextContent(
                type="text",
                text=f"Error searching file: {e}"
            )]
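Stripped of the MCP plumbing, the handler's core loop above (regex scan, pagination slice, context window clamped to the file bounds) can be sketched as a standalone function. This is a simplified rendering for illustration, not the server's actual API:

```python
import re

def search_lines(lines, pattern, context_lines=2, max_matches=50,
                 skip_matches=0, case_sensitive=False):
    """Return (total_matches, blocks): blocks holds, for each paginated
    match, a list of (line_number, is_match, text) tuples with 1-based
    line numbers, mirroring the handler's output format."""
    flags = 0 if case_sensitive else re.IGNORECASE
    regex = re.compile(pattern, flags)
    # Indices of every matching line in the file
    matches = [i for i, line in enumerate(lines) if regex.search(line)]
    blocks = []
    for m in matches[skip_matches:skip_matches + max_matches]:
        start = max(0, m - context_lines)             # clamp at file start
        end = min(len(lines), m + context_lines + 1)  # clamp at file end
        blocks.append([(i + 1, i == m, lines[i]) for i in range(start, end)])
    return len(matches), blocks
```

As in the handler, context blocks of adjacent matches may overlap; neither version merges them.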
  • Registration of the search_log_file tool in the list_tools() method, providing the tool's name, description, and input schema for the MCP protocol.
    Tool(
        name="search_log_file",
        description="Searches a log file using regex pattern and returns matching lines with surrounding context. Supports pagination of results.",
        inputSchema={
            "type": "object",
            "properties": {
                "filename": {
                    "type": "string",
                    "description": "Name of the log file to search"
                },
                "pattern": {
                    "type": "string",
                    "description": "Regex pattern to search for"
                },
                "context_lines": {
                    "type": "integer",
                    "description": "Number of lines to show before and after each match (default: 2, max: 10)",
                    "default": 2
                },
                "case_sensitive": {
                    "type": "boolean",
                    "description": "Whether the search should be case-sensitive (default: false)",
                    "default": False
                },
                "max_matches": {
                    "type": "integer",
                    "description": "Maximum number of matches to return (default: 50, max: 500)",
                    "default": 50
                },
                "skip_matches": {
                    "type": "integer",
                    "description": "Number of matches to skip (for pagination, default: 0)",
                    "default": 0
                }
            },
            "required": ["filename", "pattern"]
        }
    )
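The handler validates ranges by hand, and MCP itself does not apply schema defaults for you. A client-side helper like the hypothetical apply_defaults below (an assumption for illustration, not part of this server) could pre-fill missing optional parameters from the schema's default keys before sending a call:

```python
def apply_defaults(schema: dict, arguments: dict) -> dict:
    """Fill missing optional parameters from the JSON Schema's
    'default' entries; a convenience sketch, not part of MCP."""
    merged = dict(arguments)
    for name, prop in schema.get("properties", {}).items():
        if name not in merged and "default" in prop:
            merged[name] = prop["default"]
    return merged
```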
  • Helper function resolve_log_file used by the search_log_file handler to safely resolve the log file path within configured log directories.
    def resolve_log_file(filename: str) -> tuple[Path, Path]:
        """
        Resolve a filename to a full path within allowed directories.
    
        Returns: (log_dir, log_file) tuple
        Raises: ValueError if file is not found or not in allowed directories
        """
        directories = get_log_directories()
    
        # If filename is already an absolute path, validate it's in allowed dirs
        file_path = Path(filename)
        if file_path.is_absolute():
            try:
                resolved = file_path.resolve()
                for log_dir in directories:
                    if str(resolved).startswith(str(log_dir.resolve())):
                        return log_dir, resolved
            except Exception:
                pass
            raise ValueError(f"File not in any allowed log directory: {filename}")
    
        # Try to find the file in each directory
        for log_dir in directories:
            log_file = log_dir / filename
            if log_file.exists():
                return log_dir, log_file.resolve()
    
        # If not found, use the first directory (for error messages)
        if directories:
            return directories[0], (directories[0] / filename).resolve()
    
        raise ValueError("No log directories configured")
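One subtlety in the containment check above: comparing resolved paths with str(...).startswith(...) treats /var/log2/app.log as inside /var/log, because the test is a plain string prefix with no separator check. On Python 3.9+, Path.is_relative_to avoids this; a sketch (is_within is a hypothetical helper, not part of this server):

```python
from pathlib import Path

def is_within(path: Path, directory: Path) -> bool:
    """True if `path` resolves inside `directory`. Unlike a plain
    string-prefix check, /var/log2/x is NOT considered inside /var/log."""
    try:
        return path.resolve().is_relative_to(directory.resolve())
    except OSError:
        return False
```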
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits like returning matching lines with context and supporting pagination, but does not cover aspects such as error handling (e.g., if the file doesn't exist), performance implications (e.g., large file handling), or authentication needs. It adds some value but leaves gaps in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core functionality in the first sentence and adds supporting details concisely. Both sentences earn their place by specifying the search method, return format, and pagination support without redundancy. It is appropriately sized and structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, no annotations), the description is moderately complete. It covers the main action and key features (context, pagination) but lacks details on output format, error cases, or integration with siblings. Without an output schema, it should ideally explain return values more, but it provides enough for basic usage, leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents every parameter thoroughly. The description adds little semantic value beyond the schema: 'regex pattern' and 'pagination' are already implied by the pattern and skip_matches parameters. It offers no additional parameter semantics or usage examples, resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Searches a log file using regex pattern') and resource ('log file'), distinguishing it from siblings like get_log_content (which likely reads entire files) and list_log_files (which lists files). It explicitly mentions 'returns matching lines with surrounding context' and 'supports pagination,' making the purpose distinct and comprehensive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for regex-based searching with context and pagination, but does not explicitly state when to use this tool versus alternatives like get_log_content or read_log_paginated. It provides some context (e.g., 'searches a log file') but lacks clear guidance on scenarios where this is preferred over siblings, such as for filtered searches versus full file reads.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

