Integrates with AWS Bedrock to provide advanced log analysis capabilities, including semantic search with Amazon Titan embeddings and AI-powered error clustering and summarization using Amazon Nova models.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Log Analyzer MCP Server summarize the most frequent error patterns found in my server logs".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Log Analyzer MCP Server
100% Local | FAISS-Powered | No Cloud APIs | 30-150x Faster
A Model Context Protocol (MCP) server for intelligent log analysis with semantic search, error detection, and pattern clustering. Runs entirely locally using sentence-transformers and FAISS.
Features
Semantic Search - Find logs by meaning, not just keywords
FAISS Vector Search - 30-150x faster than traditional search
Smart Error Detection - Automatic error pattern clustering
Intelligent Caching - Lightning-fast re-indexing
100% Local - No cloud APIs, no costs, privacy-first
Hybrid Retrieval - Combines semantic + lexical matching
Quick Start (Production)
Using uvx (Recommended)
Claude Desktop Config:
Config Location: C:\Users\YOUR-USERNAME\AppData\Roaming\Claude\claude_desktop_config.json
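A minimal sketch of the entry to add is shown below; the package name `log-analyzer-mcp` and the environment variable names are assumptions, so substitute the values published for this server:

```json
{
  "mcpServers": {
    "log-analyzer": {
      "command": "uvx",
      "args": ["log-analyzer-mcp"],
      "env": {
        "AWS_ACCESS_KEY_ID": "your-access-key",
        "AWS_SECRET_ACCESS_KEY": "your-secret-key",
        "AWS_REGION": "us-east-2"
      }
    }
  }
}
```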
Restart Claude Desktop and you're done!
Manual Installation
1. Clone the Repository
2. Install Dependencies
3. Configure Environment Variables
Create a .env file in the project root:
Edit .env and add your AWS credentials:
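A sketch of what this might look like; the variable names below follow standard AWS conventions and are assumptions, so check the repository for the exact names the project expects:

```env
# Assumed variable names; the project may use different ones
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-2
```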
Usage
Running the Server Locally
Configuring with Claude Desktop
Add to your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
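For example, a sketch assuming the server is started with Python from the cloned repository (the script name, paths, and variable names are placeholders):

```json
{
  "mcpServers": {
    "log-analyzer": {
      "command": "python",
      "args": ["/path/to/log-analyzer-mcp/server.py"],
      "env": {
        "AWS_ACCESS_KEY_ID": "your-access-key",
        "AWS_SECRET_ACCESS_KEY": "your-secret-key"
      }
    }
  }
}
```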
Available Tools
1. fetch_local_logs
Fetch and chunk log files from a local directory.
Parameters:
input_folder (optional): Path to folder containing log files (default: ./logs)
chunk_size (optional): Size of each chunk in characters (default: 4096)
overlap (optional): Overlap between chunks in characters (default: 1024)
Example:
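For instance, you might ask: "Fetch and chunk the log files in ./logs using 4096-character chunks with 1024 characters of overlap."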
2. store_chunks_as_vectors
Vectorize log chunks with AWS Bedrock embeddings and intelligent caching.
Parameters:
use_cache (optional): Whether to use embedding cache (default: true)
clear_cache (optional): Clear cache before starting (default: false)
Features:
Extracts timeframes, class names, method names, error types
Parallel processing for fast vectorization
Persistent caching to avoid re-embedding
Example:
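For instance, you might ask: "Vectorize the fetched log chunks and reuse the embedding cache", or "Clear the embedding cache and re-vectorize everything" to start fresh.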
3. query_SFlogs
Query vectorized logs with semantic search and comprehensive analysis.
Parameters:
query(required): Natural language query
Features:
Hybrid semantic + lexical search
Automatic error clustering and deduplication
Severity ranking and frequency analysis
Metadata extraction (timeframes, classes, methods)
AI-powered summarization
Examples:
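For instance, you might ask: "What database connection errors occurred and which classes threw them?" or "Summarize the most frequent error patterns and rank them by severity."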
Configuration
Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
|          | AWS access key | Required |
|          | AWS secret key | Required |
|          | AWS region | us-east-2 |
|          | Connection timeout (seconds) | 60 |
|          | Read timeout (seconds) | 300 |
|          | Embedding model | amazon.titan-embed-text-v2:0 |
|          | Analysis model | amazon.nova-premier-v1:0 |
|          | Default log folder | ./logs |
|          | Default chunk size | 4096 |
|          | Default overlap | 1024 |
Architecture
How It Works
1. Log Processing Pipeline
Chunking: Split logs into overlapping chunks for better context preservation
Metadata Extraction: Extract timeframes, class names, methods, error types
Vectorization: Generate embeddings using AWS Bedrock
Caching: Store embeddings for fast re-processing
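To make the chunking and overlap settings concrete, here is a minimal sketch of how overlapping character chunks can be produced; the function name and metadata fields are illustrative, not the server's actual internals:

```python
from pathlib import Path

def chunk_log_file(path: Path, chunk_size: int = 4096, overlap: int = 1024) -> list[dict]:
    """Split one log file into overlapping character chunks with basic metadata."""
    text = path.read_text(errors="replace")
    chunks = []
    step = chunk_size - overlap  # advance by less than chunk_size so consecutive chunks overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + chunk_size]
        if not piece:
            break
        chunks.append({"source": str(path), "offset": start, "text": piece})
    return chunks

# Gather chunks for every .log and .txt file in the default folder
all_chunks = [
    chunk
    for log_file in Path("./logs").glob("*")
    if log_file.suffix in {".log", ".txt"}
    for chunk in chunk_log_file(log_file)
]
```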
2. Query Pipeline
Hybrid Search: Combine semantic similarity with lexical matching
Error Clustering: Group similar errors using fingerprinting
Ranking: Sort by severity and frequency
AI Analysis: Generate comprehensive summaries with AWS Bedrock
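To illustrate the clustering step, here is a rough sketch of fingerprint-based deduplication: volatile tokens (timestamps, hex addresses, numbers) are normalized away so repeated occurrences of the same error collapse onto one fingerprint, and clusters are ranked by frequency. The regexes and function names are illustrative assumptions, not the server's exact logic:

```python
import hashlib
import re
from collections import Counter

def error_fingerprint(line: str) -> str:
    """Collapse an error line to a stable fingerprint by stripping volatile tokens."""
    normalized = re.sub(r"\d{4}-\d{2}-\d{2}[ T][\d:.,]+", "<TS>", line)  # timestamps
    normalized = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", normalized)          # addresses
    normalized = re.sub(r"\b\d+\b", "<NUM>", normalized)                 # counters, ids
    return hashlib.sha1(normalized.encode()).hexdigest()[:12]

def cluster_errors(error_lines: list[str]) -> list[tuple[str, int, str]]:
    """Group duplicate errors; return (fingerprint, count, sample) sorted by frequency."""
    counts: Counter[str] = Counter()
    samples: dict[str, str] = {}
    for line in error_lines:
        fp = error_fingerprint(line)
        counts[fp] += 1
        samples.setdefault(fp, line)
    return [(fp, n, samples[fp]) for fp, n in counts.most_common()]
```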
Performance
Parallel Processing: Up to 5 concurrent embedding requests
Intelligent Caching: 70-90% cache hit rate on repeated processing
Adaptive Retrieval: Dynamic top-k based on query type
Token Optimization: Smart budget management for AI analysis
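One way to picture the caching behaviour: each chunk's embedding is keyed by a hash of its text, so unchanged chunks are served from disk on later runs instead of being re-embedded. The sketch below assumes a simple JSON cache file and a generic embed_fn callable; it is illustrative, not the server's actual cache layout:

```python
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path(".embedding_cache.json")  # hypothetical cache location

def load_cache() -> dict[str, list[float]]:
    return json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}

def embed_with_cache(chunks: list[str], embed_fn) -> list[list[float]]:
    """Embed only chunks whose text hash is not already cached."""
    cache = load_cache()
    vectors = []
    for text in chunks:
        key = hashlib.sha256(text.encode()).hexdigest()
        if key not in cache:
            cache[key] = embed_fn(text)  # cache miss: call the embedding model
        vectors.append(cache[key])
    CACHE_FILE.write_text(json.dumps(cache))
    return vectors
```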
Troubleshooting
Common Issues
"No vector JSON found"
Run store_chunks_as_vectors first to vectorize your logs
"Bedrock authentication failed"
Verify your AWS credentials in .env
Ensure your AWS account has Bedrock access enabled
"No chunks found"
Check that log files exist in the configured folder
Verify file extensions (.log, .txt) are correct
Logging
Logs are written to stderr for MCP compatibility. To debug:
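Increase the log level and inspect the server's stderr output. MCP servers that use the stdio transport reserve stdout for protocol messages, so diagnostics go to stderr; a minimal Python sketch of that kind of setup (illustrative, not the project's exact logging configuration):

```python
import logging
import sys

# Send all diagnostics to stderr so they never corrupt the MCP stdout channel
logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,  # raise to DEBUG while troubleshooting
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
```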
Contributing
Contributions welcome! Please:
Fork the repository
Create a feature branch
Make your changes
Submit a pull request
License
MIT License - see LICENSE file for details
Support
For issues and questions:
GitHub Issues: Create an issue
Documentation: Wiki
Roadmap
Support for additional embedding models
Real-time log streaming
Web UI for visualization
Multi-language support
Enhanced error pattern detection
Integration with monitoring tools
Acknowledgments
Built with:
Model Context Protocol - MCP specification
AWS Bedrock - AI/ML capabilities
Anthropic Claude - AI analysis