# Kibana MCP (Model Context Protocol) Server

A powerful, high-performance server that provides seamless access to Kibana and Periscope logs through a unified API. Built with a modular architecture, in-memory caching, HTTP/2 support, and OpenTelemetry tracing.
## Overview

This project bridges the gap between your applications and Kibana/Periscope logs by providing:

- **Modular Architecture**: Clean separation of concerns with dedicated modules for clients, services, and API layers
- **Dual Interface Support**: Both Kibana (KQL) and Periscope (SQL) querying
- **Multi-Index Access**: Query across 9 different log indexes (1.3+ billion logs)
- **Performance Optimized**: In-memory caching, HTTP/2, and connection pooling
- **Timezone-Aware**: Full support for international timezones (IST, UTC, PST, etc.)
- **Production-Ready**: Comprehensive error handling, retry logic, and observability
## Features

### Core Features

- **Simple API**: Easy-to-use RESTful endpoints for log searching and analysis
- **Dual Log System Support**:
  - **Kibana**: KQL-based querying for application logs
  - **Periscope**: SQL-based querying for HTTP access logs
- **Multi-Index Support**: Access to 9 indexes with 1.3+ billion logs
- **Flexible Authentication**: API-based token management for both Kibana and Periscope
- **Time-Based Searching**: Absolute and relative time ranges with full timezone support
- **Real-Time Streaming**: Monitor logs as they arrive
### Performance Features (New in v2.0.0)

- **In-Memory Caching**:
  - Schema cache: 1-hour TTL
  - Search cache: 5-minute TTL
- **HTTP/2 Support**: Multiplexed connections for faster requests
- **Connection Pooling**: 200 max connections, 50 keepalive
- **OpenTelemetry Tracing**: Distributed tracing for monitoring and debugging
- **Timezone-Aware**: Support for any IANA timezone without manual UTC conversion
### AI & Analysis Features

- **AI-Powered Analysis**: Intelligent log summarization using Neurolink
- **Smart Chunking**: Automatic handling of large log sets
- **Pattern Analysis**: Tools to identify log patterns and extract errors
- **Cross-Index Correlation**: Track requests across multiple log sources
## What's New in v2.0.0

### Modular Architecture

- Clean separation: `clients/`, `services/`, `api/`, `models/`, `utils/`
- Improved testability and maintainability
- Better error handling and logging
- Type-safe with Pydantic models

### Performance Enhancements

- In-memory caching reduces API calls
- HTTP/2 support for better throughput
- Connection pooling for efficiency
- OpenTelemetry tracing for observability

### Multi-Index Support

- 9 indexes accessible (7 with active data)
- 1.3+ billion logs available
- Index discovery and selection API
- Universal `timestamp` field compatibility

### Enhanced Timezone Support

- Periscope queries accept a timezone parameter
- No manual UTC conversion needed
- Support for IST, UTC, PST, and all IANA timezones

### Configuration Improvements

- Optimized `config.yaml` (36% smaller)
- Dynamic configuration via API
- Only essential parameters included
## Setup

### Prerequisites

- Python 3.8+
- Access to a Kibana instance (for Kibana features)
- Access to a Periscope instance (optional, for Periscope features)
- Authentication tokens for the services you want to use

### Installation

1. Clone this repository:

   ```shell
   git clone https://github.com/gaharivatsa/KIBANA_SERVER.git
   cd KIBANA_SERVER
   ```

2. Create a virtual environment:

   ```shell
   python -m venv KIBANA_E
   # On macOS/Linux
   source KIBANA_E/bin/activate
   # On Windows
   KIBANA_E\Scripts\activate
   ```

3. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

4. Make the start script executable:

   ```shell
   chmod +x ./run_kibana_mcp.sh
   ```

5. Optional: Set up AI-powered log analysis:

   ```shell
   # Install Node.js if not already installed (required for Neurolink)
   # Visit https://nodejs.org/ or use your package manager

   # Set your AI provider API key
   export GOOGLE_AI_API_KEY="your-google-ai-api-key"  # Recommended (free tier)
   # OR
   export OPENAI_API_KEY="your-openai-key"

   # Neurolink will be set up automatically when you start the server
   ```
## Configuration

The server ships with an optimized `config.yaml` that works out of the box; only essential parameters are included. Settings can also be adjusted at runtime through the dynamic-configuration API (optional).
## Authentication

### Kibana Authentication

Set the token via the API (recommended).

How to get your token:

1. Log in to Kibana in your browser
2. Open the developer tools (F12)
3. Go to Application → Cookies
4. Find the authentication cookie (e.g., a JWT token)
5. Copy the complete value
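As a sketch of what this call can look like from Python: the `/api/set_auth_token` endpoint is the one documented below, but the request-body field name (`auth_token`) is an assumption, so check it against your deployment before relying on it.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default server address

def build_set_token_request(token: str):
    """Build the URL and JSON body for the set-token call.

    The body field name ("auth_token") is an assumption; adjust it to
    match your server's actual request schema.
    """
    url = f"{BASE_URL}/api/set_auth_token"
    body = json.dumps({"auth_token": token})
    return url, body

def set_kibana_token(token: str) -> dict:
    """POST the cookie value copied from the browser to the server."""
    url, body = build_set_token_request(token)
    req = urllib.request.Request(
        url,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(set_kibana_token("paste-your-cookie-value-here"))
```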
### Periscope Authentication

How to get your Periscope token:

1. Log in to Periscope in your browser
2. Open the developer tools (F12)
3. Go to Application → Cookies
4. Find the `auth_tokens` cookie
5. Copy its value (base64 encoded)
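Since the `auth_tokens` cookie value is base64 encoded, you can decode it locally to sanity-check what you copied before sending it to the server. This is a small illustrative helper, not part of the server API:

```python
import base64
import binascii

def decode_cookie_value(value: str) -> str:
    """Decode a base64-encoded cookie value, tolerating missing '=' padding."""
    padded = value + "=" * (-len(value) % 4)
    try:
        return base64.b64decode(padded).decode("utf-8", errors="replace")
    except binascii.Error as exc:
        raise ValueError("cookie value is not valid base64") from exc

# Example: a base64 payload decodes back to the original text
print(decode_cookie_value("aGVsbG8="))  # hello
```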
## Running the Server

Start the server with the provided script, `./run_kibana_mcp.sh`. It will be available at `http://localhost:8000`. Use the health-check endpoint to confirm the server reports a healthy status before sending queries.
## API Reference

### Kibana Endpoints

| Endpoint | Description | Method |
|---|---|---|
| | Health check | GET |
| `/api/set_auth_token` | Set Kibana authentication | POST |
| `/api/discover_indexes` | List available indexes | GET |
| `/api/set_current_index` | Select index for searches | POST |
| `/api/search_logs` | **MAIN**: Search logs with KQL | POST |
| | Get most recent logs | POST |
| | Extract error logs | POST |
| `/api/summarize_logs` | AI-powered analysis | POST |

### Periscope Endpoints

| Endpoint | Description | Method |
|---|---|---|
| | Set Periscope authentication | POST |
| | List available streams | GET |
| | Get stream schema | POST |
| | Get all schemas | GET |
| | **MAIN**: Search with SQL | POST |
| | Find HTTP errors | POST |

### Utility Endpoints

| Endpoint | Description | Method |
|---|---|---|
| | Dynamic configuration | POST |
## Available Indexes

The server provides access to 9 log indexes (7 with active data):

### Active Indexes

| Index Pattern | Total Logs | Use Case | Key Fields |
|---|---|---|---|
| `breeze-v2*` | 1B+ (73.5%) | Backend API, payments | |
| `envoy-edge*` | 137M+ (10%) | HTTP traffic, errors | |
| `istio-logs-v2*` | 137M+ (10%) | Service mesh | |
| `squid-logs*` | 7M+ (0.5%) | Proxy traffic | |
| `wallet-lrw*` | 887K+ (0.1%) | Wallet transactions | |
| `analytics-dashboard-v2*` | 336K+ | Analytics API | |
| `rewards-engine-v2*` | 7.5K+ | Rewards system | |

### Empty Indexes

- `wallet-product-v2*`: no data
- `core-ledger-v2*`: no data

**Total**: ~1.3 billion logs across all indexes
## Example Usage

### 1. Discover and Set Index
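A minimal sketch of this step from Python: the two endpoints (`/api/discover_indexes`, `/api/set_current_index`) are the documented ones, but the `index` field name in the POST body is an assumption.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def api_get(path: str) -> dict:
    """GET a JSON response from the server."""
    with urllib.request.urlopen(f"{BASE_URL}{path}") as resp:
        return json.load(resp)

def api_post(path: str, payload: dict) -> dict:
    """POST a JSON payload and return the JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def build_set_index_payload(index_pattern: str) -> dict:
    """The 'index' field name is an assumption; check your server's schema."""
    return {"index": index_pattern}

if __name__ == "__main__":
    print(api_get("/api/discover_indexes"))  # list what is available
    api_post("/api/set_current_index", build_set_index_payload("breeze-v2*"))
```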
### 2. Search Logs (Kibana)

The main search endpoint covers basic KQL searches, timezone-aware time-range searches, and session-based searches.
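The three variants can be sketched with one payload builder. The `/api/search_logs` endpoint and the `timestamp` sort field are documented above; the remaining payload field names (`query`, `start_time`, `end_time`, `max_results`, `sort_field`, `sort_order`) are assumptions to adapt to your server's schema.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def build_search_payload(query: str, start: str, end: str,
                         max_results: int = 100) -> dict:
    """Build a KQL search request body.

    Timestamps are ISO 8601 with an explicit offset, so no manual
    UTC conversion is needed. Field names other than the documented
    `timestamp` sort field are assumptions.
    """
    return {
        "query": query,              # KQL expression
        "start_time": start,         # e.g. "2025-10-01T09:00:00+05:30" (IST)
        "end_time": end,
        "max_results": max_results,
        "sort_field": "timestamp",   # universal across all 9 indexes
        "sort_order": "desc",
    }

def search_logs(payload: dict) -> dict:
    req = urllib.request.Request(
        f"{BASE_URL}/api/search_logs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Session-based search: include the session_id in the KQL query itself
    payload = build_search_payload(
        'level:ERROR and session_id:"abc-123"',
        "2025-10-01T09:00:00+05:30",
        "2025-10-01T10:00:00+05:30",
    )
    print(search_logs(payload))
```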
### 3. Search Periscope Logs (SQL)

Common Periscope queries include finding 5XX errors, searching with an explicit timezone (new in v2.0.0), and quick error searches.
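A sketch of a 5XX query with the timezone parameter. The timezone parameter itself is the documented v2.0.0 feature (any IANA timezone); the stream name, column names (`status_code`, `timestamp`), and payload field names are illustrative assumptions, so check the stream-schema endpoint for the real field names.

```python
def build_5xx_sql(stream: str, limit: int = 50) -> str:
    """Build a SQL query for HTTP 5XX errors in a Periscope stream.

    Stream and column names are assumptions for illustration.
    """
    return (
        f"SELECT * FROM {stream} "
        f"WHERE status_code >= 500 AND status_code < 600 "
        f"ORDER BY timestamp DESC LIMIT {limit}"
    )

def build_periscope_payload(sql: str, timezone: str = "Asia/Kolkata") -> dict:
    """Payload field names are assumptions; the timezone value can be
    any IANA timezone, with no manual UTC conversion required."""
    return {"sql": sql, "timezone": timezone}

print(build_periscope_payload(build_5xx_sql("envoy_edge_logs")))
```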
### 4. AI-Powered Analysis

Send matched logs to the summarization endpoint (`/api/summarize_logs`) to get an AI-generated summary of patterns and errors via Neurolink.
### 5. Cross-Index Correlation

Track a request across multiple indexes by searching each index for the same session or request ID and merging the results by timestamp.
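The merging step above can be sketched as a small client-side helper; the per-index result shape here is an illustrative assumption, but the sort relies on the universal `timestamp` field the server guarantees.

```python
from typing import Dict, List

def correlate(results_by_index: Dict[str, List[dict]]) -> List[dict]:
    """Merge per-index search results into one timeline.

    Each hit is tagged with its source index, then the combined list
    is sorted on the universal `timestamp` field, giving a single view
    of a request as it crosses services.
    """
    merged = []
    for index, hits in results_by_index.items():
        for hit in hits:
            merged.append({**hit, "_source_index": index})
    return sorted(merged, key=lambda h: h["timestamp"])

timeline = correlate({
    "envoy-edge*": [{"timestamp": "2025-10-01T10:00:02Z", "msg": "HTTP 500"}],
    "breeze-v2*": [{"timestamp": "2025-10-01T10:00:01Z", "msg": "payment failed"}],
})
print([h["_source_index"] for h in timeline])  # ['breeze-v2*', 'envoy-edge*']
```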
## Troubleshooting

### Common Issues

#### 1. Timestamp Field Errors

**Problem**: "No mapping found for [timestamp] in order to sort on"

**Solution**: The server sorts on the `timestamp` field, which works for all indexes, so this error should not occur in v2.0.0. If you do see it, verify that your queries sort on `timestamp` rather than `@timestamp`.

#### 2. Authentication Errors (401)

**Problem**: "Unauthorized" or "Invalid token"

**Solution**:

- The token has expired; get a fresh token from your browser
- Re-authenticate using `/api/set_auth_token`

#### 3. No Results Returned

Checklist:

- Is the correct index set?
- Is the time range correct?
- Try a broader query (`"*"`)
- Check the timezone offset

#### 4. Slow Queries

Solutions:

- Reduce `max_results`
- Narrow the time range
- Add specific query terms
- Check that caching is working (repeated queries should be faster)
### Testing
## Performance Features

### In-Memory Caching

Automatic caching reduces load on backend systems:

- **Schema cache**: 1-hour TTL (Periscope stream schemas)
- **Search cache**: 5-minute TTL (recent queries)

Benefits:

- Faster repeated queries
- Fewer API calls
- Lower backend load
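The mechanism can be pictured as a small TTL cache. This is an illustrative sketch of the idea, not the server's actual implementation:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire after `ttl` seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

schema_cache = TTLCache(ttl=3600)  # 1-hour TTL, as for stream schemas
search_cache = TTLCache(ttl=300)   # 5-minute TTL, as for recent queries

search_cache.set("q:error", ["hit1", "hit2"])
print(search_cache.get("q:error"))  # ['hit1', 'hit2'] while fresh
```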
### HTTP/2 Support

- Multiplexed connections
- Faster concurrent requests
- Better throughput for parallel queries

### Connection Pooling

- Max connections: 200
- Keepalive connections: 50
- Efficient connection reuse
- Reduced latency

### OpenTelemetry Tracing

- Distributed request tracing
- Performance monitoring
- Debugging of distributed issues
- Request-flow tracking across components
## Architecture

### Modular Structure

### Legacy vs Modular

| Feature | Legacy (v1.x) | Modular (v2.0) |
|---|---|---|
| Architecture | Monolithic | Modular |
| Caching | ❌ None | ✅ In-memory |
| HTTP | HTTP/1.1 | ✅ HTTP/2 |
| Tracing | ❌ None | ✅ OpenTelemetry |
| Connection Pool | ⚠️ Basic | ✅ Advanced |
| Timezone Support | ⚠️ Manual | ✅ Automatic |
| Config Management | ⚠️ Static | ✅ Dynamic |
| Error Handling | ⚠️ Basic | ✅ Comprehensive |
## AI Integration

### For AI Assistants

Use the provided `AI_rules.txt` for generic product documentation, or `AI_rules_file.txt` for company-specific usage.

Key requirements:

- Always authenticate first
- Discover and set an index before searching
- Use the `timestamp` field for sorting
- Include `session_id` in queries when tracking sessions
- Use ISO timestamps with a timezone

### Example AI Workflow

1. Authenticate: `POST /api/set_auth_token`
2. Discover indexes: `GET /api/discover_indexes`
3. Set index: `POST /api/set_current_index`
4. Search logs: `POST /api/search_logs`
5. Analyze (optional): `POST /api/summarize_logs`
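The five steps above can be sketched as a data-driven sequence. The endpoints are the documented ones; the payload field names are assumptions, and the HTTP client is supplied by the caller so the ordering stays explicit:

```python
# The workflow as (method, path, payload) tuples. Payload field names
# ("auth_token", "index", "query") are assumptions for illustration.
WORKFLOW = [
    ("POST", "/api/set_auth_token",    {"auth_token": "<token>"}),
    ("GET",  "/api/discover_indexes",  None),
    ("POST", "/api/set_current_index", {"index": "breeze-v2*"}),
    ("POST", "/api/search_logs",       {"query": "level:ERROR"}),
    ("POST", "/api/summarize_logs",    {"query": "level:ERROR"}),
]

def run_workflow(call):
    """Execute the steps in order with a caller-supplied HTTP function.

    `call(method, path, payload)` is whatever HTTP client the assistant
    uses; the responses are returned in step order.
    """
    return [call(method, path, payload) for method, path, payload in WORKFLOW]

# Dry run that just echoes the request line for each step
for line in run_workflow(lambda method, path, payload: f"{method} {path}"):
    print(line)
```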
For complete AI integration instructions, refer to AI_rules.txt (generic) or AI_rules_file.txt (company-specific).
## Documentation

- `AI_rules.txt`: generic product usage guide
- `AI_rules_file.txt`: company-specific usage (internal)
- `CONFIG_USAGE_ANALYSIS.md`: configuration reference (deleted; information now in this README)
- `KIBANA_INDEXES_COMPLETE_ANALYSIS.md`: index details (deleted; information now in this README)
## Migration from v1.x

If upgrading from v1.x:

1. **Update imports**: change from `kibana_mcp_server.py` to `main.py`
2. **Update config**: remove unused parameters (see `config.yaml`)
3. **Update queries**: use the `timestamp` field instead of `@timestamp` or `start_time`
4. **Test endpoints**: all endpoints remain compatible
5. **Enjoy the performance**: caching and HTTP/2 benefits are automatic
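For the query update, a tiny helper can rewrite old payloads. This is an illustrative sketch; it assumes the legacy sort fields appear as plain string values in the payload dict.

```python
LEGACY_SORT_FIELDS = {"@timestamp", "start_time"}

def migrate_sort_fields(payload: dict) -> dict:
    """Replace legacy sort-field values with the universal `timestamp`."""
    return {
        key: ("timestamp" if value in LEGACY_SORT_FIELDS else value)
        for key, value in payload.items()
    }

old = {"query": "level:ERROR", "sort_field": "@timestamp"}
print(migrate_sort_fields(old))  # {'query': 'level:ERROR', 'sort_field': 'timestamp'}
```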
## Performance Benchmarks

- **Cache hit rate**: ~80% for repeated queries
- **Response time**: 30-50% faster with HTTP/2
- **Connection reuse**: 90%+ with pooling
- **Memory usage**: <200 MB with a full cache
## Contributing

This is a proprietary project. For issues or feature requests, contact the maintainers.

## License

This project is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).

This license requires that reusers:

- Give appropriate credit (Attribution)
- Do not use it for commercial purposes (NonCommercial)
- Do not distribute modified versions (NoDerivatives)

For more information, see the LICENSE file.
**Version**: 2.0.0 (Modular)
**Last Updated**: October 2025
**Total Logs**: 1.3+ billion
**Indexes**: 9 (7 active)
**Status**: Production ready ✅