log-analyzer-mcp
by Fato07

Server Configuration

Describes the environment variables required to run the server.

No arguments; this server requires no environment variables.

Capabilities

Features and capabilities supported by this server

Capability   | Details
tools        | { "listChanged": false }
prompts      | { "listChanged": false }
resources    | { "subscribe": false, "listChanged": false }
experimental | {}
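
For illustration, here is a minimal sketch of connecting to the server and listing the tools advertised above, assuming the official MCP Python SDK and a local stdio launch. The launch command is a placeholder and depends on how log-analyzer-mcp is installed on your machine.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command; adjust to your local install of log-analyzer-mcp.
server = StdioServerParameters(command="uv", args=["run", "log-analyzer-mcp"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools advertised under the "tools" capability.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)

asyncio.run(main())

The tool examples further down this page assume a session initialized exactly like this.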

Tools

Functions exposed to the LLM to take actions

log_analyzer_parse
Parse and analyze a log file, detecting its format and extracting metadata.
Args:
- file_path: Path to the log file to analyze
- format_hint: Force specific format (syslog, apache_access, apache_error, jsonl, docker, python, java, kubernetes, generic) or None for auto-detect
- max_lines: Maximum lines to parse (100-100000, default: 10000)
- response_format: Output format - 'markdown' or 'json'
Returns: Analysis results including detected format, time range, level distribution, and sample entries.
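
As a hedged example, calling this tool through an already-initialized ClientSession (see the connection sketch in the Capabilities section; the file path is illustrative):

# Parse a log file, letting the server auto-detect its format.
result = await session.call_tool(
    "log_analyzer_parse",
    arguments={
        "file_path": "/var/log/app/server.log",
        "max_lines": 10000,
        "response_format": "markdown",
    },
)
for block in result.content:
    if block.type == "text":
        print(block.text)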
log_analyzer_search
Search for patterns in a log file with context lines.
Args:
- file_path: Path to the log file to search
- pattern: Search pattern (regex or plain text)
- is_regex: Treat pattern as regex (default: False, plain text)
- case_sensitive: Case-sensitive search (default: False)
- context_lines: Lines of context before/after match (0-10, default: 3)
- max_matches: Maximum matches to return (1-200, default: 50)
- level_filter: Filter by log level (ERROR, WARN, INFO, DEBUG)
- response_format: Output format - 'markdown' or 'json'
Returns: Search results with matches and surrounding context.
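
A similar sketch for a regex search with level filtering and context, using the same assumed session and an illustrative path:

# Find timeout-related ERROR entries with three lines of context around each match.
result = await session.call_tool(
    "log_analyzer_search",
    arguments={
        "file_path": "/var/log/app/server.log",
        "pattern": r"timeout|timed out",
        "is_regex": True,
        "level_filter": "ERROR",
        "context_lines": 3,
        "max_matches": 50,
    },
)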
log_analyzer_extract_errors
Extract all errors and exceptions from a log file with stack traces.
Args:
- file_path: Path to the log file
- include_warnings: Include WARN level entries (default: False)
- group_similar: Group similar error messages (default: True)
- max_errors: Maximum errors to return (1-500, default: 100)
- response_format: Output format - 'markdown' or 'json'
Returns: Extracted errors grouped by similarity with occurrence counts, timestamps, and sample stack traces.
log_analyzer_summarize
Generate a debugging summary of a log file.
Args:
- file_path: Path to the log file
- focus: Focus area - 'errors', 'performance', 'security', or 'all' (default)
- max_lines: Maximum lines to analyze (100-100000, default: 10000)
- response_format: Output format - 'markdown' or 'json'
Returns: Summary including file overview, level distribution, top errors, anomalies detected, and recommended investigation areas.
log_analyzer_tail
Get the most recent log entries from a file.
Args:
- file_path: Path to the log file
- lines: Number of lines to return (1-1000, default: 100)
- level_filter: Filter by log level (ERROR, WARN, INFO, DEBUG)
- response_format: Output format - 'markdown' or 'json'
Returns: The last N log entries, parsed and formatted.
log_analyzer_correlate
Correlate events around anchor points in a log file.
Args:
- file_path: Path to the log file
- anchor_pattern: Pattern to anchor correlation around (regex)
- window_seconds: Time window in seconds around anchor (1-3600, default: 60)
- max_anchors: Maximum anchor points to analyze (1-50, default: 10)
- response_format: Output format - 'markdown' or 'json'
Returns: Correlated events around each anchor point, showing what happened before and after the anchor event.
log_analyzer_diff
Compare log files or time periods within a log file.
Args:
- file_path_a: First log file path
- file_path_b: Second log file path (optional, for comparing two files)
- time_range_a_start: Start time for first period (ISO format, for time comparison)
- time_range_a_end: End time for first period (ISO format)
- time_range_b_start: Start time for second period (ISO format)
- time_range_b_end: End time for second period (ISO format)
- response_format: Output format - 'markdown' or 'json'
Returns: Comparison showing new errors, resolved errors, and volume changes.
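
For example, comparing a rotated log against the current one to surface new and resolved errors (same assumed session; the file names are illustrative):

# Diff yesterday's rotated log against today's active log.
result = await session.call_tool(
    "log_analyzer_diff",
    arguments={
        "file_path_a": "/var/log/app/server.log.1",
        "file_path_b": "/var/log/app/server.log",
    },
)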
log_analyzer_watch
Watch a log file for new entries since a given position. This enables polling-based log watching: the first call with from_position=0 returns the current end-of-file position, and subsequent calls with the returned position get the new entries added since then.
Args:
- file_path: Path to the log file to watch
- from_position: File position to read from. Use 0 for the initial call (returns the current end position), or use the returned current_position from a previous call.
- max_lines: Maximum lines to read per call (1-1000, default: 100)
- level_filter: Filter by log levels, comma-separated (e.g., "ERROR,WARN")
- pattern_filter: Regex pattern to filter messages
- response_format: Output format - 'markdown' or 'json'
Returns: New log entries since the last position, with the updated position for the next call.
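
To make the polling contract concrete, here is a sketch of a watch loop under the same session assumption. The polling interval is arbitrary, and read_position is a hypothetical helper that pulls current_position out of the tool's JSON response; it is not part of the server.

# Initial call: from_position=0 only establishes the current end-of-file position.
state = await session.call_tool(
    "log_analyzer_watch",
    arguments={
        "file_path": "/var/log/app/server.log",
        "from_position": 0,
        "response_format": "json",
    },
)
position = read_position(state)  # hypothetical helper, not provided by the server

while True:
    await asyncio.sleep(5)  # arbitrary polling interval
    result = await session.call_tool(
        "log_analyzer_watch",
        arguments={
            "file_path": "/var/log/app/server.log",
            "from_position": position,
            "level_filter": "ERROR,WARN",
        },
    )
    position = read_position(result)
    # ...handle any new entries here...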
log_analyzer_suggest_patterns
Analyze a log file and suggest useful search patterns. Scans the log content to identify patterns for:
- Common error templates (normalized messages)
- Identifiers (UUIDs, request IDs, user IDs, session IDs)
- Security indicators (auth failures, suspicious activity)
- Performance indicators (slow requests, high memory)
- HTTP endpoints with errors
Args:
- file_path: Path to the log file to analyze
- focus: Analysis focus - 'all', 'errors', 'security', 'performance', or 'identifiers' (default: 'all')
- max_patterns: Maximum patterns to suggest (1-20, default: 10)
- max_lines: Maximum lines to analyze (100-100000, default: 10000)
- response_format: Output format - 'markdown' or 'json'
Returns: Suggested search patterns with descriptions, match counts, and examples.
log_analyzer_trace
Extract and follow trace/correlation IDs across log entries. Automatically detects trace IDs (OpenTelemetry, X-Request-ID, AWS X-Ray, UUID) and groups related log entries to show request flows through your system.
Args:
- file_path: Path to the log file to analyze
- trace_id: Specific trace ID to filter for (None for all traces)
- max_traces: Maximum number of trace groups to return (1-500, default: 100)
- max_lines: Maximum lines to process (100-100000, default: 10000)
- response_format: Output format - 'markdown' or 'json'
Returns: Trace groups showing request flows, including trace ID types detected, entry counts, time spans, and error indicators.
log_analyzer_multi
Analyze multiple log files together for cross-file debugging. Supports three operations:
- merge: Interleave entries by timestamp (like 'sort -m')
- correlate: Find events happening across files within a time window
- compare: Diff error patterns between files
Args:
- file_paths: List of log file paths to analyze (2-10 files)
- operation: Analysis operation - 'merge', 'correlate', or 'compare' (default: 'merge')
- time_window: Time window in seconds for correlation (1-3600, default: 60)
- max_entries: Maximum entries to return (100-5000, default: 1000)
- response_format: Output format - 'markdown' or 'json'
Returns: Combined analysis results based on the selected operation.
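
For instance, merging two services' logs into a single timeline (same assumed session; the paths are illustrative):

# Interleave API and worker logs by timestamp for one combined debugging timeline.
result = await session.call_tool(
    "log_analyzer_multi",
    arguments={
        "file_paths": ["/var/log/app/api.log", "/var/log/app/worker.log"],
        "operation": "merge",
        "max_entries": 1000,
    },
)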
log_analyzer_ask
Answer questions about log files using AI-assisted analysis. Translates natural language questions into appropriate log analysis operations and provides intelligent, contextual answers.
Example questions:
- "Why did the database connection fail?"
- "How many errors occurred in the last hour?"
- "What happened before the server crashed?"
- "Show me all authentication failures"
- "When did the first timeout occur?"
Args:
- file_path: Path to the log file to analyze
- question: Natural language question about the logs
- max_results: Maximum supporting entries to include (10-200, default: 50)
- response_format: Output format - 'markdown' or 'json'
Returns: Natural language answer with supporting log entries and suggestions.
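
A minimal sketch of asking one of the example questions above, under the same session assumption:

# Ask a natural-language question; the answer comes back with supporting entries.
result = await session.call_tool(
    "log_analyzer_ask",
    arguments={
        "file_path": "/var/log/app/server.log",
        "question": "What happened before the server crashed?",
    },
)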
log_analyzer_scan_sensitive
Detect sensitive data in logs (PII, credentials, API keys). Scans log files for potentially sensitive information including:
- Email addresses
- Credit card numbers (Visa, MasterCard, Amex)
- API keys and tokens (AWS, GitHub, Slack, generic)
- Passwords in URLs or config
- Social Security Numbers (SSN)
- JWT and Bearer tokens
- Database connection strings
- Private key markers
- Phone numbers
- IP addresses (optional)
Args:
- file_path: Path to the log file to scan
- redact: Redact sensitive data in output (default: False)
- categories: Filter to specific categories. Options: email, credit_card, api_key, token, password, ssn, ip_address, phone, connection_string, private_key
- include_ips: Include IP address detection (default: False)
- max_matches: Maximum matches to return (1-500, default: 100)
- max_lines: Maximum lines to scan (1-1000000, default: 100000)
- response_format: Output format - 'markdown' or 'json'
Returns: Sensitive data scan results with matches and statistics.
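
For example, a redacted scan before sharing logs externally (same assumed session; only parameters documented above are used, and the path is illustrative):

# Scan for PII and credentials, redacting matches in the returned report.
result = await session.call_tool(
    "log_analyzer_scan_sensitive",
    arguments={
        "file_path": "/var/log/app/server.log",
        "redact": True,
        "max_matches": 200,
    },
)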
log_analyzer_suggest_format
Analyze a log file and suggest the best parsing approach. Returns detailed format detection information including:
- Detected format with confidence score
- Alternative formats to try if confidence is low
- Sample of unparseable lines with suggestions
- Custom pattern suggestions for the generic parser
Args:
- file_path: Path to the log file to analyze
- sample_size: Number of lines to sample for analysis (default: 100)
- response_format: Output format - 'markdown' or 'json'
Returns: Format suggestions and analysis results.

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Fato07/log-analyzer-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.