
Kibana MCP Server

by gaharivatsa

πŸ” Kibana MCP (Model Context Protocol) Server


A powerful, high-performance server that provides seamless access to Kibana and Periscope logs through a unified API. Built with modular architecture, in-memory caching, HTTP/2 support, and OpenTelemetry tracing.

πŸ“‹ Table of Contents

🌟 Overview

This project bridges the gap between your applications and Kibana/Periscope logs by providing:

  1. Modular Architecture: Clean separation of concerns with dedicated modules for clients, services, and API layers

  2. Dual Interface Support: Both Kibana (KQL) and Periscope (SQL) querying

  3. Multi-Index Access: Query across 9 different log indexes (1.3+ billion logs)

  4. Performance Optimized: In-memory caching, HTTP/2, and connection pooling

  5. Timezone-Aware: Full support for international timezones (IST, UTC, PST, etc.)

  6. Production-Ready: Comprehensive error handling, retry logic, and observability
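
The retry logic mentioned above can be pictured as a small exponential-backoff helper. This is an illustrative sketch only — the function name and parameters are hypothetical, not the server's actual retry_manager API:

```python
import time

def retry_with_backoff(fn, max_attempts=3, base_delay=0.5,
                       retryable=(ConnectionError, TimeoutError)):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.5s, 1s, 2s, ...
```

Only transient error types are retried; anything else (e.g. an authentication failure) propagates immediately.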

✨ Features

Core Features

  • Simple API: Easy-to-use RESTful endpoints for log searching and analysis

  • Dual Log System Support:

    • Kibana: KQL-based querying for application logs

    • Periscope: SQL-based querying for HTTP access logs

  • Multi-Index Support: Access to 9 indexes with 1.3+ billion logs

  • Flexible Authentication: API-based token management for both Kibana and Periscope

  • Time-Based Searching: Absolute and relative time ranges with full timezone support

  • Real-Time Streaming: Monitor logs as they arrive

Performance Features (New in v2.0.0)

  • ⚑ In-Memory Caching:

    • Schema cache: 1 hour TTL

    • Search cache: 5 minutes TTL

  • πŸš€ HTTP/2 Support: Multiplexed connections for faster requests

  • πŸ”„ Connection Pooling: 200 max connections, 50 keepalive

  • πŸ“Š OpenTelemetry Tracing: Distributed tracing for monitoring and debugging

  • 🌍 Timezone-Aware: Support for any IANA timezone without manual UTC conversion

AI & Analysis Features

  • 🧠 AI-Powered Analysis: Intelligent log summarization using Neurolink

  • Smart Chunking: Automatic handling of large log sets

  • Pattern Analysis: Tools to identify log patterns and extract errors

  • Cross-Index Correlation: Track requests across multiple log sources
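
The "smart chunking" idea can be illustrated with a simple batching helper — a sketch of the pattern, not the server's actual implementation:

```python
def chunk_logs(logs, chunk_size=100):
    """Split a large list of log entries into fixed-size chunks,
    so each chunk fits within an AI model's context window."""
    return [logs[i:i + chunk_size] for i in range(0, len(logs), chunk_size)]
```

Each chunk can then be summarized separately and the partial summaries merged into one analysis.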

πŸ†• What's New in v2.0.0

Modular Architecture

  • βœ… Clean separation: clients/, services/, api/, models/, utils/

  • βœ… Improved testability and maintainability

  • βœ… Better error handling and logging

  • βœ… Type-safe with Pydantic models

Performance Enhancements

  • βœ… In-memory caching reduces API calls

  • βœ… HTTP/2 support for better throughput

  • βœ… Connection pooling for efficiency

  • βœ… OpenTelemetry tracing for observability

Multi-Index Support

  • βœ… 9 indexes accessible (7 with active data)

  • βœ… 1.3+ billion logs available

  • βœ… Index discovery and selection API

  • βœ… Universal timestamp field compatibility

Enhanced Timezone Support

  • βœ… Periscope queries with timezone parameter

  • βœ… No manual UTC conversion needed

  • βœ… Support for IST, UTC, PST, and all IANA timezones

Configuration Improvements

  • βœ… Optimized config.yaml (36% smaller)

  • βœ… Dynamic configuration via API

  • βœ… Only essential parameters included

πŸš€ Setup

Prerequisites

  • Python 3.8+

  • Access to Kibana instance (for Kibana features)

  • Access to Periscope instance (optional, for Periscope features)

  • Authentication tokens for the services you want to use

Installation

  1. Clone this repository:

    git clone https://github.com/gaharivatsa/KIBANA_SERVER.git
    cd KIBANA_SERVER
  2. Create a virtual environment:

    python -m venv KIBANA_E

    # On macOS/Linux
    source KIBANA_E/bin/activate

    # On Windows
    KIBANA_E\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Make the start script executable:

    chmod +x ./run_kibana_mcp.sh
  5. Optional: Set up AI-powered log analysis:

    # Install Node.js if not already installed (required for Neurolink)
    # Visit https://nodejs.org/ or use your package manager

    # Set your AI provider API key
    export GOOGLE_AI_API_KEY="your-google-ai-api-key"  # Recommended (free tier)
    # OR
    export OPENAI_API_KEY="your-openai-key"

    # Neurolink will be set up automatically when you start the server

Configuration

The server comes with an optimized config.yaml that works out of the box. Key settings:

elasticsearch:
  host: ""                      # Set via API or environment
  timestamp_field: "timestamp"  # ✅ Works for ALL 9 indexes
  verify_ssl: true

mcp_server:
  host: "0.0.0.0"
  port: 8000
  log_level: "info"

periscope:
  host: ""

timeouts:
  kibana_request_timeout: 30

Dynamic Configuration (optional):

curl -X POST http://localhost:8000/api/set_config \
  -H "Content-Type: application/json" \
  -d '{
    "configs_to_set": {
      "elasticsearch.host": "your-kibana.example.com",
      "mcp_server.log_level": "debug"
    }
  }'

πŸ” Authentication

Kibana Authentication

Set via API (Recommended):

curl -X POST http://localhost:8000/api/set_auth_token \
  -H "Content-Type: application/json" \
  -d '{"auth_token": "YOUR_KIBANA_JWT_TOKEN"}'

How to Get Your Token:

  1. Log in to Kibana in your browser

  2. Open developer tools (F12)

  3. Go to Application β†’ Cookies

  4. Find the authentication cookie (e.g., JWT token)

  5. Copy the complete value

Periscope Authentication

curl -X POST http://localhost:8000/api/set_periscope_auth_token \
  -H "Content-Type: application/json" \
  -d '{"auth_token": "YOUR_PERISCOPE_AUTH_TOKEN"}'

How to Get Periscope Token:

  1. Log in to Periscope in your browser

  2. Open developer tools (F12)

  3. Go to Application β†’ Cookies

  4. Find the auth_tokens cookie

  5. Copy its value (base64 encoded)

πŸ–₯️ Running the Server

Start the server:

./run_kibana_mcp.sh

The server will be available at http://localhost:8000

Health Check:

curl http://localhost:8000/api/health

Response:

{
  "success": true,
  "message": "Server is healthy",
  "version": "2.0.0",
  "status": "ok"
}

πŸ“‘ API Reference

Kibana Endpoints

| Endpoint | Description | Method |
| --- | --- | --- |
| /api/health | Health check | GET |
| /api/set_auth_token | Set Kibana authentication | POST |
| /api/discover_indexes | List available indexes | GET |
| /api/set_current_index | Select index for searches | POST |
| /api/search_logs | MAIN - Search logs with KQL | POST |
| /api/get_recent_logs | Get most recent logs | POST |
| /api/extract_errors | Extract error logs | POST |
| /api/summarize_logs | 🧠 AI-powered analysis | POST |

Periscope Endpoints

| Endpoint | Description | Method |
| --- | --- | --- |
| /api/set_periscope_auth_token | Set Periscope authentication | POST |
| /api/get_periscope_streams | List available streams | GET |
| /api/get_periscope_stream_schema | Get stream schema | POST |
| /api/get_all_periscope_schemas | Get all schemas | GET |
| /api/search_periscope_logs | MAIN - Search with SQL | POST |
| /api/search_periscope_errors | Find HTTP errors | POST |

Utility Endpoints

| Endpoint | Description | Method |
| --- | --- | --- |
| /api/set_config | Dynamic configuration | POST |

πŸ—‚οΈ Available Indexes

The server provides access to 9 log indexes (7 with active data):

Active Indexes

| Index Pattern | Total Logs | Use Case | Key Fields |
| --- | --- | --- | --- |
| breeze-v2* | 1B+ (73.5%) | Backend API, payments | session_id, message, level |
| envoy-edge* | 137M+ (10%) | HTTP traffic, errors | response_code, path, duration |
| istio-logs-v2* | 137M+ (10%) | Service mesh | level, message |
| squid-logs* | 7M+ (0.5%) | Proxy traffic | level, message |
| wallet-lrw* | 887K+ (0.1%) | Wallet transactions | order_id, txn_uuid |
| analytics-dashboard-v2* | 336K+ | Analytics API | auth, headers |
| rewards-engine-v2* | 7.5K+ | Rewards system | level, message |

Empty Indexes

  • wallet-product-v2* - No data

  • core-ledger-v2* - No data

Total: ~1.3 Billion logs across all indexes

πŸ“ Example Usage

1. Discover and Set Index

# Discover available indexes
curl -X GET http://localhost:8000/api/discover_indexes

# Response:
# {
#   "success": true,
#   "indexes": ["breeze-v2*", "envoy-edge*", "istio-logs-v2*", ...],
#   "count": 9
# }

# Set the index to use
curl -X POST http://localhost:8000/api/set_current_index \
  -H "Content-Type: application/json" \
  -d '{"index_pattern": "breeze-v2*"}'

2. Search Logs (Kibana)

Basic Search:

curl -X POST http://localhost:8000/api/search_logs \
  -H "Content-Type: application/json" \
  -d '{
    "query_text": "error OR exception",
    "max_results": 50,
    "sort_by": "timestamp",
    "sort_order": "desc"
  }'

Search with Time Range (Timezone-Aware):

curl -X POST http://localhost:8000/api/search_logs \
  -H "Content-Type: application/json" \
  -d '{
    "query_text": "payment AND failed",
    "start_time": "2025-10-14T09:00:00+05:30",
    "end_time": "2025-10-14T17:00:00+05:30",
    "max_results": 100
  }'

Session-Based Search:

curl -X POST http://localhost:8000/api/search_logs \
  -H "Content-Type: application/json" \
  -d '{
    "query_text": "PcuUFbLIPLlTbBMwQXl9Y",
    "max_results": 200,
    "sort_by": "timestamp",
    "sort_order": "asc"
  }'

3. Search Periscope Logs (SQL)

Find 5XX Errors:

curl -X POST http://localhost:8000/api/search_periscope_logs \
  -H "Content-Type: application/json" \
  -d '{
    "sql_query": "SELECT * FROM \"envoy_logs\" WHERE status_code >= '\''500'\'' AND status_code < '\''600'\''",
    "start_time": "1h",
    "max_results": 50
  }'

Search with Timezone (NEW!):

curl -X POST http://localhost:8000/api/search_periscope_logs \
  -H "Content-Type: application/json" \
  -d '{
    "sql_query": "SELECT * FROM \"envoy_logs\" WHERE status_code >= '\''500'\''",
    "start_time": "2025-10-14 09:00:00",
    "end_time": "2025-10-14 13:00:00",
    "timezone": "Asia/Kolkata",
    "max_results": 100
  }'

Quick Error Search:

curl -X POST http://localhost:8000/api/search_periscope_errors \
  -H "Content-Type: application/json" \
  -d '{
    "hours": 1,
    "stream": "envoy_logs",
    "error_codes": "5%",
    "timezone": "Asia/Kolkata"
  }'

4. AI-Powered Analysis

curl -X POST http://localhost:8000/api/summarize_logs \
  -H "Content-Type: application/json" \
  -d '{
    "query_text": "error",
    "max_results": 50,
    "start_time": "1h"
  }'

Response (example):

{
  "success": true,
  "analysis": {
    "summary": "Analysis of 42 error logs showing payment processing failures",
    "key_insights": [
      "Payment gateway returned 503 errors for 8 transactions",
      "Retry mechanism activated in 67% of failed cases"
    ],
    "errors": [
      "PaymentGatewayError: Service temporarily unavailable (503)"
    ],
    "function_calls": ["processPayment()", "retryTransaction()"],
    "recommendations": [
      "Implement circuit breaker for payment gateway",
      "Add monitoring alerts for gateway health"
    ]
  }
}

5. Cross-Index Correlation

Track a request across multiple indexes:

# Step 1: Check HTTP layer (envoy-edge)
curl -X POST http://localhost:8000/api/set_current_index \
  -H "Content-Type: application/json" \
  -d '{"index_pattern": "envoy-edge*"}'

curl -X POST http://localhost:8000/api/search_logs \
  -H "Content-Type: application/json" \
  -d '{
    "query_text": "x_session_id:abc123",
    "max_results": 50
  }'

# Step 2: Check backend processing (breeze-v2)
curl -X POST http://localhost:8000/api/set_current_index \
  -H "Content-Type: application/json" \
  -d '{"index_pattern": "breeze-v2*"}'

curl -X POST http://localhost:8000/api/search_logs \
  -H "Content-Type: application/json" \
  -d '{
    "query_text": "abc123",
    "max_results": 200,
    "sort_order": "asc"
  }'

πŸ”§ Troubleshooting

Common Issues

1. Timestamp Field Errors

Problem: "No mapping found for [timestamp] in order to sort on"

Solution: The server uses the timestamp field by default, which works across all nine indexes, so this error should not occur in v2.0.0.

If you see it:

curl -X POST http://localhost:8000/api/set_config \
  -H "Content-Type: application/json" \
  -d '{
    "configs_to_set": {
      "elasticsearch.timestamp_field": "@timestamp"
    }
  }'

2. Authentication Errors (401)

Problem: "Unauthorized" or "Invalid token"

Solution:

  • Token expired - get a fresh token from browser

  • Re-authenticate using /api/set_auth_token

3. No Results Returned

Checklist:

  1. βœ… Is the correct index set?

  2. βœ… Is the time range correct?

  3. βœ… Try a broader query ("*")

  4. βœ… Check timezone offset

4. Slow Queries

Solutions:

  • Reduce max_results

  • Narrow time range

  • Add specific query terms

  • Check if caching is working (should be faster on repeated queries)

Testing

# Test Kibana connectivity
curl -X POST http://localhost:8000/api/search_logs \
  -H "Content-Type: application/json" \
  -d '{"query_text": "*", "max_results": 1}'

# Test Periscope connectivity
curl -X GET http://localhost:8000/api/get_periscope_streams

⚑ Performance Features

In-Memory Caching

Automatic caching reduces load on backend systems:

  • Schema Cache: 1 hour TTL (Periscope stream schemas)

  • Search Cache: 5 minutes TTL (recent queries)

Benefits:

  • Faster repeated queries

  • Reduced API calls

  • Lower backend load
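
The TTL caches can be pictured as a dict keyed by query, with each entry carrying an expiry time. A minimal sketch of the pattern — not the server's actual cache.py:

```python
import time

class TTLCache:
    """Minimal in-memory cache: entries expire ttl_seconds after being set."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_time, value)

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # never cached
        expiry, value = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

# Two caches mirroring the documented TTLs
schema_cache = TTLCache(ttl_seconds=3600)  # 1 hour
search_cache = TTLCache(ttl_seconds=300)   # 5 minutes
```

A search handler would check `search_cache.get(query_key)` before hitting Kibana and `set()` the result afterwards.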

HTTP/2 Support

  • Multiplexed connections

  • Faster concurrent requests

  • Better throughput for parallel queries

Connection Pooling

  • Max connections: 200

  • Keepalive connections: 50

  • Efficient connection reuse

  • Reduced latency

OpenTelemetry Tracing

  • Distributed request tracing

  • Performance monitoring

  • Debug distributed issues

  • Track request flow across components

πŸ—οΈ Architecture

Modular Structure

KIBANA_SERVER/
├── main.py                     # Server entry point
├── config.yaml                 # Configuration
├── requirements.txt            # Dependencies
├── src/
│   ├── api/
│   │   ├── app.py              # FastAPI application
│   │   └── http/
│   │       └── routes.py       # API endpoints
│   ├── clients/
│   │   ├── kibana_client.py    # Kibana API client
│   │   ├── periscope_client.py # Periscope API client
│   │   ├── http_manager.py     # HTTP/2 + pooling
│   │   └── retry_manager.py    # Retry logic
│   ├── services/
│   │   └── log_service.py      # Business logic
│   ├── models/
│   │   ├── requests.py         # Request models
│   │   └── responses.py        # Response models
│   ├── utils/
│   │   └── cache.py            # Caching utilities
│   ├── observability/
│   │   └── tracing.py          # OpenTelemetry
│   ├── security/
│   │   └── sanitizers.py       # Input validation
│   └── core/
│       ├── config.py           # Configuration
│       ├── constants.py        # Constants
│       └── logging_config.py   # Logging
└── AI_rules.txt                # Generic AI guide

Legacy vs Modular

| Feature | Legacy (v1.x) | Modular (v2.0) |
| --- | --- | --- |
| Architecture | Monolithic | Modular |
| Caching | ❌ None | ✅ In-memory |
| HTTP | HTTP/1.1 | ✅ HTTP/2 |
| Tracing | ❌ None | ✅ OpenTelemetry |
| Connection Pool | ❌ Basic | ✅ Advanced |
| Timezone Support | ⚠️ Manual | ✅ Automatic |
| Config Management | ⚠️ Static | ✅ Dynamic |
| Error Handling | ⚠️ Basic | ✅ Comprehensive |

πŸ€– AI Integration

For AI Assistants

Use the provided AI_rules.txt for generic product documentation or AI_rules_file.txt for company-specific usage.

Key Requirements:

  • βœ… Always authenticate first

  • βœ… Discover and set index before searching

  • βœ… Use timestamp field for sorting

  • βœ… Include session_id in queries when tracking sessions

  • βœ… Use ISO timestamps with timezone

Example AI Workflow

  1. Authenticate:

    POST /api/set_auth_token
  2. Discover Indexes:

    GET /api/discover_indexes
  3. Set Index:

    POST /api/set_current_index
  4. Search Logs:

    POST /api/search_logs
  5. Analyze (Optional):

    POST /api/summarize_logs
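
The workflow above can also be scripted. This sketch only assembles the ordered request sequence without making any network calls; the token and index values are hypothetical placeholders:

```python
import json

BASE_URL = "http://localhost:8000"

def workflow_requests(auth_token, index_pattern, query_text):
    """Return the ordered (method, path, json_body) calls for the standard
    authenticate -> discover -> set index -> search -> summarize workflow."""
    return [
        ("POST", "/api/set_auth_token", {"auth_token": auth_token}),
        ("GET", "/api/discover_indexes", None),
        ("POST", "/api/set_current_index", {"index_pattern": index_pattern}),
        ("POST", "/api/search_logs", {"query_text": query_text, "max_results": 50}),
        ("POST", "/api/summarize_logs", {"query_text": query_text, "max_results": 50}),
    ]

for method, path, body in workflow_requests("YOUR_TOKEN", "breeze-v2*", "error"):
    print(method, BASE_URL + path, json.dumps(body) if body else "")
```

Each tuple can then be sent with any HTTP client (the curl examples earlier show the equivalent commands).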

For complete AI integration instructions, refer to AI_rules.txt (generic) or AI_rules_file.txt (company-specific).

πŸ“š Documentation

  • AI_rules.txt - Generic product usage guide

  • AI_rules_file.txt - Company-specific usage (internal)

  • CONFIG_USAGE_ANALYSIS.md - Configuration reference (deleted, info in this README)

  • KIBANA_INDEXES_COMPLETE_ANALYSIS.md - Index details (deleted, info in this README)

πŸ”„ Migration from v1.x

If upgrading from v1.x:

  1. Update imports: Change from kibana_mcp_server.py to main.py

  2. Update config: Remove unused parameters (see config.yaml)

  3. Update queries: Use timestamp field instead of @timestamp or start_time

  4. Test endpoints: All endpoints remain compatible

  5. Enjoy performance: Automatic caching and HTTP/2 benefits

πŸ“Š Performance Benchmarks

  • Cache Hit Rate: ~80% for repeated queries

  • Response Time: 30-50% faster with HTTP/2

  • Connection Reuse: 90%+ with pooling

  • Memory Usage: <200MB with full cache

🀝 Contributing

This is a proprietary project. For issues or feature requests, contact the maintainers.

πŸ“œ License

This project is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).

License: CC BY-NC-ND 4.0

This license requires that reusers:

  • βœ… Give appropriate credit (Attribution)

  • ❌ Do not use for commercial purposes (NonCommercial)

  • ❌ Do not distribute modified versions (NoDerivatives)

For more information, see the LICENSE file.


Version: 2.0.0 (Modular)
Last Updated: October 2025
Total Logs: 1.3+ Billion
Indexes: 9 (7 active)
Status: Production Ready βœ…
