Log Analyzer MCP Server πŸš€

100% Local | FAISS-Powered | No Cloud APIs | 30-150x Faster

A Model Context Protocol (MCP) server for intelligent log analysis with semantic search, error detection, and pattern clustering. Runs entirely locally using sentence-transformers and FAISS.

GitHub: https://github.com/suriya-ML/log-checker-mcp

✨ Features

  • πŸ” Semantic Search - Find logs by meaning, not just keywords

  • ⚑ FAISS Vector Search - 30-150x faster than traditional search

  • πŸ› Smart Error Detection - Automatic error pattern clustering

  • πŸ’Ύ Intelligent Caching - Lightning-fast re-indexing

  • 🏠 100% Local - No cloud APIs, no costs, privacy-first

  • πŸ“Š Hybrid Retrieval - Combines semantic + lexical matching

🎯 Quick Start (Production)

```powershell
# Install uv
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Claude Desktop Config:

```json
{
  "mcpServers": {
    "log-analyzer": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/suriya-ML/log-checker-mcp.git",
        "log-analyzer-mcp"
      ]
    }
  }
}
```

Config Location: C:\Users\YOUR-USERNAME\AppData\Roaming\Claude\claude_desktop_config.json

Restart Claude Desktop and you're done! βœ…

πŸ“¦ Manual Installation

1. Clone the Repository

```shell
git clone https://github.com/suriya-ML/log-checker-mcp.git
cd log-checker-mcp
```

2. Install Dependencies

```shell
pip install -r requirements.txt
```

3. Configure Environment Variables

Create a .env file in the project root:

```shell
cp .env.example .env
```

Edit .env and add your AWS credentials:

```
AWS_ACCESS_KEY_ID=your_access_key_here
AWS_SECRET_ACCESS_KEY=your_secret_key_here
AWS_REGION=us-east-2
```

Usage

Running the Server Locally

```shell
python server.py
```

Configuring with Claude Desktop

Add to your Claude Desktop configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

  • Windows: %APPDATA%\Claude\claude_desktop_config.json

```json
{
  "mcpServers": {
    "log-analyzer": {
      "command": "python",
      "args": ["/absolute/path/to/log-analyzer-mcp/server.py"],
      "env": {
        "AWS_ACCESS_KEY_ID": "your_key",
        "AWS_SECRET_ACCESS_KEY": "your_secret",
        "AWS_REGION": "us-east-2"
      }
    }
  }
}
```

Available Tools

1. fetch_local_logs

Fetch and chunk log files from a local directory.

Parameters:

  • input_folder (optional): Path to folder containing log files (default: ./logs)

  • chunk_size (optional): Size of each chunk in characters (default: 4096)

  • overlap (optional): Overlap between chunks in characters (default: 1024)

Example:

Use fetch_local_logs to process logs from /path/to/logs with chunk_size 5000
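The overlapping chunking behavior can be pictured with a short sketch. The function name and logic here are illustrative, not the server's actual implementation; only the parameter defaults (chunk_size=4096, overlap=1024) come from the tool description above:

```python
def chunk_text(text: str, chunk_size: int = 4096, overlap: int = 1024) -> list[str]:
    """Split text into overlapping chunks (illustrative sketch).

    Consecutive chunks share `overlap` characters so that context
    spanning a chunk boundary is not lost.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk after the first begins with the last `overlap` characters of its predecessor, which is what preserves context across boundaries.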

2. store_chunks_as_vectors

Vectorize log chunks with AWS Bedrock embeddings and intelligent caching.

Parameters:

  • use_cache (optional): Whether to use embedding cache (default: true)

  • clear_cache (optional): Clear cache before starting (default: false)

Features:

  • Extracts timeframes, class names, method names, error types

  • Parallel processing for fast vectorization

  • Persistent caching to avoid re-embedding

Example:

Use store_chunks_as_vectors to vectorize the logs
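The persistent caching idea can be sketched as a content-hash lookup: a chunk is only embedded if its hash has not been seen before. The file name, function names, and cache layout below are hypothetical, not the server's real code:

```python
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path("embedding_cache.json")  # hypothetical cache location


def load_cache() -> dict:
    """Load the persisted embedding cache, if any."""
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return {}


def embed_with_cache(chunks, embed_fn, cache):
    """Return embeddings, only calling embed_fn for unseen chunks.

    embed_fn stands in for the real embedding call (e.g. Bedrock Titan).
    """
    vectors = []
    for chunk in chunks:
        key = hashlib.sha256(chunk.encode()).hexdigest()
        if key not in cache:
            cache[key] = embed_fn(chunk)
        vectors.append(cache[key])
    CACHE_FILE.write_text(json.dumps(cache))  # persist for the next run
    return vectors
```

On a re-run over the same logs, every lookup hits the cache and no embedding calls are made, which is where the fast re-indexing comes from.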

3. query_SFlogs

Query vectorized logs with semantic search and comprehensive analysis.

Parameters:

  • query (required): Natural language query

Features:

  • Hybrid semantic + lexical search

  • Automatic error clustering and deduplication

  • Severity ranking and frequency analysis

  • Metadata extraction (timeframes, classes, methods)

  • AI-powered summarization

Examples:

  • Query logs: "What NullPointerExceptions occurred?"

  • Query logs: "Summarize all errors"

  • Query logs: "Show timeout issues in UserHandler"
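The hybrid semantic + lexical scoring can be sketched roughly as a weighted blend of cosine similarity and query-term overlap. The weighting (`alpha`) and function names are illustrative assumptions, not the server's actual formula:

```python
import math


def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def lexical_score(query: str, text: str) -> float:
    """Fraction of query terms that appear verbatim in the chunk."""
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0


def hybrid_rank(query, query_vec, chunks, alpha=0.7):
    """Blend semantic and lexical scores; alpha is a hypothetical weight."""
    scored = []
    for text, vec in chunks:
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * lexical_score(query, text)
        scored.append((score, text))
    return [t for _, t in sorted(scored, reverse=True)]
```

The lexical term keeps exact identifiers (class names, exception types) from being drowned out when the embedding similarity alone is ambiguous.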

Configuration

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| AWS_ACCESS_KEY_ID | AWS access key | Required |
| AWS_SECRET_ACCESS_KEY | AWS secret key | Required |
| AWS_REGION | AWS region | us-east-2 |
| AWS_CONNECT_TIMEOUT | Connection timeout (seconds) | 60 |
| AWS_READ_TIMEOUT | Read timeout (seconds) | 300 |
| BEDROCK_EMBED_MODEL_ID | Embedding model | amazon.titan-embed-text-v2:0 |
| BEDROCK_NOVA_MODEL_ID | Analysis model | amazon.nova-premier-v1:0 |
| LOG_FOLDER | Default log folder | ./logs |
| DEFAULT_CHUNK_SIZE | Default chunk size | 4096 |
| DEFAULT_OVERLAP | Default overlap | 1024 |
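Reading these variables amounts to `os.getenv` calls with the table's defaults. This is a minimal sketch, not the project's actual config.py, but the variable names and defaults match the table:

```python
import os


def load_config() -> dict:
    """Read settings with the documented defaults (sketch only)."""
    required = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]
    missing = [k for k in required if not os.getenv(k)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return {
        "region": os.getenv("AWS_REGION", "us-east-2"),
        "connect_timeout": int(os.getenv("AWS_CONNECT_TIMEOUT", "60")),
        "read_timeout": int(os.getenv("AWS_READ_TIMEOUT", "300")),
        "embed_model": os.getenv("BEDROCK_EMBED_MODEL_ID", "amazon.titan-embed-text-v2:0"),
        "nova_model": os.getenv("BEDROCK_NOVA_MODEL_ID", "amazon.nova-premier-v1:0"),
        "log_folder": os.getenv("LOG_FOLDER", "./logs"),
        "chunk_size": int(os.getenv("DEFAULT_CHUNK_SIZE", "4096")),
        "overlap": int(os.getenv("DEFAULT_OVERLAP", "1024")),
    }
```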

Architecture

```
log-analyzer-mcp/
├── server.py               # Main MCP server implementation
├── config.py               # Configuration management
├── utils/                  # Utility modules
│   ├── logging_utils.py    # Logging configuration
│   ├── file_utils.py       # File operations
│   ├── bedrock_utils.py    # AWS Bedrock integration
│   ├── chunking_utils.py   # Text chunking
│   └── error_extraction.py # Error pattern extraction
├── logs/                   # Log storage (created automatically)
├── requirements.txt        # Python dependencies
├── .env.example            # Environment template
└── README.md               # This file
```

How It Works

1. Log Processing Pipeline

Raw Logs β†’ Chunking β†’ Metadata Extraction β†’ Vectorization β†’ Storage
  • Chunking: Split logs into overlapping chunks for better context preservation

  • Metadata Extraction: Extract timeframes, class names, methods, error types

  • Vectorization: Generate embeddings using AWS Bedrock

  • Caching: Store embeddings for fast re-processing

2. Query Pipeline

Query β†’ Embedding β†’ Hybrid Search β†’ Error Clustering β†’ AI Analysis β†’ Results
  • Hybrid Search: Combine semantic similarity with lexical matching

  • Error Clustering: Group similar errors using fingerprinting

  • Ranking: Sort by severity and frequency

  • AI Analysis: Generate comprehensive summaries with AWS Bedrock
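The fingerprint-based error clustering step can be sketched by normalizing away the variable parts of an error line (numbers, hex ids) so that repeats of the same underlying error collapse into one group. The normalization rules here are an illustrative assumption:

```python
import re
from collections import defaultdict


def fingerprint(line: str) -> str:
    """Normalize an error line so similar errors share a fingerprint.

    Illustrative: strips hex ids and numbers so that
    'Timeout after 30s (req 0xA1)' and 'Timeout after 45s (req 0xB2)'
    collapse together.
    """
    fp = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    fp = re.sub(r"\d+", "<N>", fp)
    return fp.strip().lower()


def cluster_errors(lines):
    """Group lines by fingerprint, most frequent clusters first."""
    clusters = defaultdict(list)
    for line in lines:
        clusters[fingerprint(line)].append(line)
    return sorted(clusters.values(), key=len, reverse=True)
```

Sorting clusters by size gives the frequency ranking; severity ranking would layer on top of this grouping.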

Performance

  • Parallel Processing: Up to 5 concurrent embedding requests

  • Intelligent Caching: 70-90% cache hit rate on repeated processing

  • Adaptive Retrieval: Dynamic top-k based on query type

  • Token Optimization: Smart budget management for AI analysis
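The parallel embedding figure above (up to 5 concurrent requests) maps naturally onto a bounded thread pool. A minimal sketch, with `embed_fn` standing in for the real Bedrock call:

```python
from concurrent.futures import ThreadPoolExecutor


def embed_parallel(chunks, embed_fn, max_workers=5):
    """Embed chunks with up to max_workers concurrent requests.

    pool.map preserves input order, so vectors line up with chunks;
    max_workers=5 mirrors the concurrency figure stated above.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(embed_fn, chunks))
```

Threads (rather than processes) fit here because embedding requests are I/O-bound network calls.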

Troubleshooting

Common Issues

"No vector JSON found"

  • Run store_chunks_as_vectors first to vectorize your logs

"Bedrock authentication failed"

  • Verify your AWS credentials in .env

  • Ensure your AWS account has Bedrock access enabled

"No chunks found"

  • Check that log files exist in the configured folder

  • Verify file extensions (.log, .txt) are correct
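A quick way to check what the server would pick up is to list files with the expected extensions. This helper is a hypothetical diagnostic, and the `.log`/`.txt` extension set is taken from the bullet above:

```python
from pathlib import Path


def find_log_files(folder="./logs", exts=(".log", ".txt")):
    """List files under folder with the expected log extensions."""
    return sorted(p for p in Path(folder).rglob("*") if p.suffix in exts)
```

If this returns an empty list for your configured folder, the "No chunks found" error is expected.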

Logging

Logs are written to stderr for MCP compatibility. To debug:

```shell
python server.py 2> debug.log
```
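The reason for stderr-only logging is that MCP's stdio transport uses stdout for protocol messages, so any stray print would corrupt the stream. A minimal sketch of a stderr-only logging setup (logger name and format are illustrative):

```python
import logging
import sys

# stdout carries MCP protocol traffic, so diagnostics must go to stderr.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    force=True,  # replace any pre-existing handlers
)
logger = logging.getLogger("log-analyzer")
logger.debug("debug output stays off stdout")
```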

Contributing

Contributions welcome! Please:

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes

  4. Submit a pull request

License

MIT License - see LICENSE file for details

Support

For issues and questions, please open an issue on the GitHub repository.

Roadmap

  • Support for additional embedding models

  • Real-time log streaming

  • Web UI for visualization

  • Multi-language support

  • Enhanced error pattern detection

  • Integration with monitoring tools

Acknowledgments

Built with sentence-transformers, FAISS, AWS Bedrock, and the Model Context Protocol.

