Skip to main content
Glama

šŸ” Kibana MCP (Model Context Protocol) Server

Python 3.8+ License Architecture Version

A powerful, high-performance server that provides seamless access to Kibana and Periscope logs through a unified API. Built with modular architecture, in-memory caching, HTTP/2 support, and OpenTelemetry tracing.

šŸ“‹ Table of Contents

🌟 Overview

This project bridges the gap between your applications and Kibana/Periscope logs by providing:

  1. Modular Architecture: Clean separation of concerns with dedicated modules for clients, services, and API layers

  2. Dual Interface Support: Both Kibana (KQL) and Periscope (SQL) querying

  3. Multi-Index Access: Query across 9 different log indexes (1.3+ billion logs)

  4. Performance Optimized: In-memory caching, HTTP/2, and connection pooling

  5. Timezone-Aware: Full support for international timezones (IST, UTC, PST, etc.)

  6. Production-Ready: Comprehensive error handling, retry logic, and observability

✨ Features

Core Features

  • Simple API: Easy-to-use RESTful endpoints for log searching and analysis

  • Dual Log System Support:

    • Kibana: KQL-based querying for application logs

    • Periscope: SQL-based querying for HTTP access logs

  • Multi-Index Support: Access to 9 indexes with 1.3+ billion logs

  • Flexible Authentication: API-based token management for both Kibana and Periscope

  • Time-Based Searching: Absolute and relative time ranges with full timezone support

  • Real-Time Streaming: Monitor logs as they arrive

Performance Features (New in v2.0.0)

  • ⚔ In-Memory Caching:

    • Schema cache: 1 hour TTL

    • Search cache: 5 minutes TTL

  • šŸš€ HTTP/2 Support: Multiplexed connections for faster requests

  • šŸ”„ Connection Pooling: 200 max connections, 50 keepalive

  • šŸ“Š OpenTelemetry Tracing: Distributed tracing for monitoring and debugging

  • šŸŒ Timezone-Aware: Support for any IANA timezone without manual UTC conversion

AI & Analysis Features

  • 🧠 AI-Powered Analysis: Intelligent log summarization using Neurolink

  • Smart Chunking: Automatic handling of large log sets

  • Pattern Analysis: Tools to identify log patterns and extract errors

  • Cross-Index Correlation: Track requests across multiple log sources

šŸ†• What's New in v2.0.0

Modular Architecture

  • āœ… Clean separation: clients/, services/, api/, models/, utils/

  • āœ… Improved testability and maintainability

  • āœ… Better error handling and logging

  • āœ… Type-safe with Pydantic models

Performance Enhancements

  • āœ… In-memory caching reduces API calls

  • āœ… HTTP/2 support for better throughput

  • āœ… Connection pooling for efficiency

  • āœ… OpenTelemetry tracing for observability

Multi-Index Support

  • āœ… 9 indexes accessible (7 with active data)

  • āœ… 1.3+ billion logs available

  • āœ… Index discovery and selection API

  • āœ… Universal timestamp field compatibility

Enhanced Timezone Support

  • āœ… Periscope queries with timezone parameter

  • āœ… No manual UTC conversion needed

  • āœ… Support for IST, UTC, PST, and all IANA timezones

Configuration Improvements

  • āœ… Optimized config.yaml (36% smaller)

  • āœ… Dynamic configuration via API

  • āœ… Only essential parameters included

šŸš€ Setup

Prerequisites

  • Python 3.8+

  • Access to Kibana instance (for Kibana features)

  • Access to Periscope instance (optional, for Periscope features)

  • Authentication tokens for the services you want to use

Installation

  1. Clone this repository:

    git clone https://github.com/gaharivatsa/KIBANA_SERVER.git cd KIBANA_SERVER
  2. Create a virtual environment:

    python -m venv KIBANA_E # On macOS/Linux source KIBANA_E/bin/activate # On Windows KIBANA_E\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Make the start script executable:

    chmod +x ./run_kibana_mcp.sh
  5. Optional: Set up AI-powered log analysis:

    # Install Node.js if not already installed (required for Neurolink) # Visit https://nodejs.org/ or use your package manager # Set your AI provider API key export GOOGLE_AI_API_KEY="your-google-ai-api-key" # Recommended (free tier) # OR export OPENAI_API_KEY="your-openai-key" # Neurolink will be automatically set up when you start the server

Configuration

The server comes with an optimized config.yaml that works out of the box. Key settings:

elasticsearch: host: "" # Set via API or environment timestamp_field: "timestamp" # āœ… Works for ALL 9 indexes verify_ssl: true mcp_server: host: "0.0.0.0" port: 8000 log_level: "info" periscope: host: "" # Default: periscope.breezesdk.store timeouts: kibana_request_timeout: 30

Dynamic Configuration (optional):

curl -X POST http://localhost:8000/api/set_config \ -H "Content-Type: application/json" \ -d '{ "configs_to_set": { "elasticsearch.host": "your-kibana.example.com", "mcp_server.log_level": "debug" } }'

šŸ” Authentication

Kibana Authentication

Set via API (Recommended):

curl -X POST http://localhost:8000/api/set_auth_token \ -H "Content-Type: application/json" \ -d '{"auth_token":"YOUR_KIBANA_JWT_TOKEN"}'

How to Get Your Token:

  1. Log in to Kibana in your browser

  2. Open developer tools (F12)

  3. Go to Application → Cookies

  4. Find the authentication cookie (e.g., JWT token)

  5. Copy the complete value

Periscope Authentication

curl -X POST http://localhost:8000/api/set_periscope_auth_token \ -H "Content-Type: application/json" \ -d '{"auth_token":"YOUR_PERISCOPE_AUTH_TOKEN"}'

How to Get Periscope Token:

  1. Log in to Periscope in your browser

  2. Open developer tools (F12)

  3. Go to Application → Cookies

  4. Find the auth_tokens cookie

  5. Copy its value (base64 encoded)

šŸ–„ļø Running the Server

Start the server:

./run_kibana_mcp.sh

The server will be available at http://localhost:8000

Health Check:

curl http://localhost:8000/api/health

Response:

{ "success": true, "message": "Server is healthy", "version": "2.0.0", "status": "ok" }

šŸ“” API Reference

Kibana Endpoints

Endpoint

Description

Method

/api/health

Health check

GET

/api/set_auth_token

Set Kibana authentication

POST

/api/discover_indexes

List available indexes

GET

/api/set_current_index

Select index for searches

POST

/api/search_logs

MAIN

- Search logs with KQL

POST

/api/get_recent_logs

Get most recent logs

POST

/api/extract_errors

Extract error logs

POST

/api/summarize_logs

🧠 AI-powered analysis

POST

Periscope Endpoints

Endpoint

Description

Method

/api/set_periscope_auth_token

Set Periscope authentication

POST

/api/get_periscope_streams

List available streams

GET

/api/get_periscope_stream_schema

Get stream schema

POST

/api/get_all_periscope_schemas

Get all schemas

GET

/api/search_periscope_logs

MAIN

- Search with SQL

POST

/api/search_periscope_errors

Find HTTP errors

POST

Utility Endpoints

Endpoint

Description

Method

/api/set_config

Dynamic configuration

POST

šŸ—‚ļø Available Indexes

The server provides access to 9 log indexes (7 with active data):

Active Indexes

Index Pattern

Total Logs

Use Case

Key Fields

breeze-v2*

1B+ (73.5%)

Backend API, payments

session_id

,

message

,

level

envoy-edge*

137M+ (10%)

HTTP traffic, errors

response_code

,

path

,

duration

istio-logs-v2*

137M+ (10%)

Service mesh

level

,

message

squid-logs*

7M+ (0.5%)

Proxy traffic

level

,

message

wallet-lrw*

887K+ (0.1%)

Wallet transactions

order_id

,

txn_uuid

analytics-dashboard-v2*

336K+

Analytics API

auth

,

headers

rewards-engine-v2*

7.5K+

Rewards system

level

,

message

Empty Indexes

  • wallet-product-v2* - No data

  • core-ledger-v2* - No data

Total: ~1.3 Billion logs across all indexes

šŸ“ Example Usage

1. Discover and Set Index

# Discover available indexes curl -X GET http://localhost:8000/api/discover_indexes # Response: { "success": true, "indexes": ["breeze-v2*", "envoy-edge*", "istio-logs-v2*", ...], "count": 9 } # Set the index to use curl -X POST http://localhost:8000/api/set_current_index \ -H "Content-Type: application/json" \ -d '{"index_pattern": "breeze-v2*"}'

2. Search Logs (Kibana)

Basic Search:

curl -X POST http://localhost:8000/api/search_logs \ -H "Content-Type: application/json" \ -d '{ "query_text": "error OR exception", "max_results": 50, "sort_by": "timestamp", "sort_order": "desc" }'

Search with Time Range (Timezone-Aware):

curl -X POST http://localhost:8000/api/search_logs \ -H "Content-Type: application/json" \ -d '{ "query_text": "payment AND failed", "start_time": "2025-10-14T09:00:00+05:30", "end_time": "2025-10-14T17:00:00+05:30", "max_results": 100 }'

Session-Based Search:

curl -X POST http://localhost:8000/api/search_logs \ -H "Content-Type: application/json" \ -d '{ "query_text": "PcuUFbLIPLlTbBMwQXl9Y", "max_results": 200, "sort_by": "timestamp", "sort_order": "asc" }'

3. Search Periscope Logs (SQL)

Find 5XX Errors:

curl -X POST http://localhost:8000/api/search_periscope_logs \ -H "Content-Type: application/json" \ -d '{ "sql_query": "SELECT * FROM \"envoy_logs\" WHERE status_code >= '\''500'\'' AND status_code < '\''600'\''", "start_time": "1h", "max_results": 50 }'

Search with Timezone (NEW!):

curl -X POST http://localhost:8000/api/search_periscope_logs \ -H "Content-Type: application/json" \ -d '{ "sql_query": "SELECT * FROM \"envoy_logs\" WHERE status_code >= '\''500'\''", "start_time": "2025-10-14 09:00:00", "end_time": "2025-10-14 13:00:00", "timezone": "Asia/Kolkata", "max_results": 100 }'

Quick Error Search:

curl -X POST http://localhost:8000/api/search_periscope_errors \ -H "Content-Type: application/json" \ -d '{ "hours": 1, "stream": "envoy_logs", "error_codes": "5%", "timezone": "Asia/Kolkata" }'

4. AI-Powered Analysis

curl -X POST http://localhost:8000/api/summarize_logs \ -H "Content-Type: application/json" \ -d '{ "query_text": "error", "max_results": 50, "start_time": "1h" }'

Response (example):

{ "success": true, "analysis": { "summary": "Analysis of 42 error logs showing payment processing failures", "key_insights": [ "Payment gateway returned 503 errors for 8 transactions", "Retry mechanism activated in 67% of failed cases" ], "errors": [ "PaymentGatewayError: Service temporarily unavailable (503)" ], "function_calls": ["processPayment()", "retryTransaction()"], "recommendations": [ "Implement circuit breaker for payment gateway", "Add monitoring alerts for gateway health" ] } }

5. Cross-Index Correlation

Track a request across multiple indexes:

# Step 1: Check HTTP layer (envoy-edge) curl -X POST http://localhost:8000/api/set_current_index \ -H "Content-Type: application/json" \ -d '{"index_pattern": "envoy-edge*"}' curl -X POST http://localhost:8000/api/search_logs \ -H "Content-Type: application/json" \ -d '{ "query_text": "x_session_id:abc123", "max_results": 50 }' # Step 2: Check backend processing (breeze-v2) curl -X POST http://localhost:8000/api/set_current_index \ -H "Content-Type: application/json" \ -d '{"index_pattern": "breeze-v2*"}' curl -X POST http://localhost:8000/api/search_logs \ -H "Content-Type: application/json" \ -d '{ "query_text": "abc123", "max_results": 200, "sort_order": "asc" }'

šŸ”§ Troubleshooting

Common Issues

1. Timestamp Field Errors

Problem: "No mapping found for [timestamp] in order to sort on"

Solution: The server uses timestamp field which works for all indexes. This error should not occur in v2.0.0.

If you see it:

curl -X POST http://localhost:8000/api/set_config \ -H "Content-Type: application/json" \ -d '{ "configs_to_set": { "elasticsearch.timestamp_field": "@timestamp" } }'

2. Authentication Errors (401)

Problem: "Unauthorized" or "Invalid token"

Solution:

  • Token expired - get a fresh token from browser

  • Re-authenticate using /api/set_auth_token

3. No Results Returned

Checklist:

  1. āœ… Is the correct index set?

  2. āœ… Is the time range correct?

  3. āœ… Try a broader query ("*")

  4. āœ… Check timezone offset

4. Slow Queries

Solutions:

  • Reduce max_results

  • Narrow time range

  • Add specific query terms

  • Check if caching is working (should be faster on repeated queries)

Testing

# Test Kibana connectivity curl -X POST http://localhost:8000/api/search_logs \ -H "Content-Type: application/json" \ -d '{"query_text": "*", "max_results": 1}' # Test Periscope connectivity curl -X GET http://localhost:8000/api/get_periscope_streams

⚔ Performance Features

In-Memory Caching

Automatic caching reduces load on backend systems:

  • Schema Cache: 1 hour TTL (Periscope stream schemas)

  • Search Cache: 5 minutes TTL (recent queries)

Benefits:

  • Faster repeated queries

  • Reduced API calls

  • Lower backend load

HTTP/2 Support

  • Multiplexed connections

  • Faster concurrent requests

  • Better throughput for parallel queries

Connection Pooling

  • Max connections: 200

  • Keepalive connections: 50

  • Efficient connection reuse

  • Reduced latency

OpenTelemetry Tracing

  • Distributed request tracing

  • Performance monitoring

  • Debug distributed issues

  • Track request flow across components

šŸ—ļø Architecture

Modular Structure

KIBANA_SERVER/ ā”œā”€ā”€ main.py # Server entry point ā”œā”€ā”€ config.yaml # Configuration ā”œā”€ā”€ requirements.txt # Dependencies ā”œā”€ā”€ src/ │ ā”œā”€ā”€ api/ │ │ ā”œā”€ā”€ app.py # FastAPI application │ │ └── http/ │ │ └── routes.py # API endpoints │ ā”œā”€ā”€ clients/ │ │ ā”œā”€ā”€ kibana_client.py # Kibana API client │ │ ā”œā”€ā”€ periscope_client.py # Periscope API client │ │ ā”œā”€ā”€ http_manager.py # HTTP/2 + pooling │ │ └── retry_manager.py # Retry logic │ ā”œā”€ā”€ services/ │ │ └── log_service.py # Business logic │ ā”œā”€ā”€ models/ │ │ ā”œā”€ā”€ requests.py # Request models │ │ └── responses.py # Response models │ ā”œā”€ā”€ utils/ │ │ └── cache.py # Caching utilities │ ā”œā”€ā”€ observability/ │ │ └── tracing.py # OpenTelemetry │ ā”œā”€ā”€ security/ │ │ └── sanitizers.py # Input validation │ └── core/ │ ā”œā”€ā”€ config.py # Configuration │ ā”œā”€ā”€ constants.py # Constants │ └── logging_config.py # Logging └── AI_rules.txt # Generic AI guide

Legacy vs Modular

Feature

Legacy (v1.x)

Modular (v2.0)

Architecture

Monolithic

Modular

Caching

āŒ None

āœ… In-memory

HTTP

HTTP/1.1

āœ… HTTP/2

Tracing

āŒ None

āœ… OpenTelemetry

Connection Pool

āŒ Basic

āœ… Advanced

Timezone Support

āš ļø Manual

āœ… Automatic

Config Management

āš ļø Static

āœ… Dynamic

Error Handling

āš ļø Basic

āœ… Comprehensive

šŸ¤– AI Integration

For AI Assistants

Use the provided AI_rules.txt for generic product documentation or AI_rules_file.txt for company-specific usage.

Key Requirements:

  • āœ… Always authenticate first

  • āœ… Discover and set index before searching

  • āœ… Use timestamp field for sorting

  • āœ… Include session_id in queries when tracking sessions

  • āœ… Use ISO timestamps with timezone

Example AI Workflow

  1. Authenticate:

    POST /api/set_auth_token
  2. Discover Indexes:

    GET /api/discover_indexes
  3. Set Index:

    POST /api/set_current_index
  4. Search Logs:

    POST /api/search_logs
  5. Analyze (Optional):

    POST /api/summarize_logs

For complete AI integration instructions, refer to AI_rules.txt (generic) or AI_rules_file.txt (company-specific).

šŸ“š Documentation

  • AI_rules.txt - Generic product usage guide

  • AI_rules_file.txt - Company-specific usage (internal)

  • CONFIG_USAGE_ANALYSIS.md - Configuration reference (deleted, info in this README)

  • KIBANA_INDEXES_COMPLETE_ANALYSIS.md - Index details (deleted, info in this README)

šŸ”„ Migration from v1.x

If upgrading from v1.x:

  1. Update imports: Change from kibana_mcp_server.py to main.py

  2. Update config: Remove unused parameters (see config.yaml)

  3. Update queries: Use timestamp field instead of @timestamp or start_time

  4. Test endpoints: All endpoints remain compatible

  5. Enjoy performance: Automatic caching and HTTP/2 benefits

šŸ“Š Performance Benchmarks

  • Cache Hit Rate: ~80% for repeated queries

  • Response Time: 30-50% faster with HTTP/2

  • Connection Reuse: 90%+ with pooling

  • Memory Usage: <200MB with full cache

šŸ¤ Contributing

This is a proprietary project. For issues or feature requests, contact the maintainers.

šŸ“œ License

This project is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).

License: CC BY-NC-ND 4.0

This license requires that reusers:

  • āœ… Give appropriate credit (Attribution)

  • āŒ Do not use for commercial purposes (NonCommercial)

  • āŒ Do not distribute modified versions (NoDerivatives)

For more information, see the LICENSE file.


Version: 2.0.0 (Modular)
Last Updated: October 2025
Total Logs: 1.3+ Billion
Indexes: 9 (7 active)
Status: Production Ready āœ…

-
security - not tested
F
license - not found
-
quality - not tested

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gaharivatsa/KIBANA_SERVER'

If you have feedback or need assistance with the MCP directory API, please join our Discord server