Conversation Search MCP Server

Version: 1.1.0
Status: Production Ready
Last Updated: 2025-01-07

Overview

Advanced MCP server providing semantic and traditional search capabilities across Claude Code conversation history. Features vector embeddings, hybrid search, and comprehensive conversation management tools.

🚀 Key Features

Search Capabilities

  • Traditional Search: Fast FTS-based keyword search with session indexing

  • Vector Search: Semantic similarity using OpenAI embeddings

  • Hybrid Search: Combined semantic + keyword matching for optimal results

  • Context Retrieval: Adjacent chunk expansion for full conversation context
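Conceptually, the hybrid mode blends the two rankings with a tunable weight. A minimal sketch (the `alpha` weight and max-normalization here are illustrative assumptions, not the server's actual scoring):

```typescript
// Combine normalized keyword (FTS) and vector-similarity scores.
// alpha = 1 favors semantic matches; alpha = 0 favors keyword matches.
interface Scored { id: string; keywordScore: number; vectorScore: number }

function hybridRank(results: Scored[], alpha = 0.5): string[] {
  const maxK = Math.max(...results.map(r => r.keywordScore), 1e-9);
  const maxV = Math.max(...results.map(r => r.vectorScore), 1e-9);
  return results
    .map(r => ({
      id: r.id,
      score: alpha * (r.vectorScore / maxV) + (1 - alpha) * (r.keywordScore / maxK),
    }))
    .sort((a, b) => b.score - a.score)
    .map(r => r.id);
}
```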

Conversation Management

  • Recent Conversations: Optimized retrieval with project filtering

  • Session Details: Full conversation history with message threading

  • Auto-Naming: AI-powered conversation title generation

  • Batch Operations: Bulk renaming and processing capabilities

Database Operations

  • Incremental Updates: Process only new conversations since last run

  • Full Migration: Complete conversation database rebuild

  • Statistics: Comprehensive indexing and usage metrics

  • Vector Migration: One-time embedding generation for existing conversations
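Incremental updates hinge on remembering a high-water mark and processing only sessions modified past it. A minimal sketch (the session shape and timestamp field are illustrative, not the server's actual schema):

```typescript
// Select only sessions modified after the last successful run,
// and compute the new high-water mark to persist for next time.
interface Session { id: string; modifiedAt: number } // epoch ms

function selectIncremental(sessions: Session[], lastRunMs: number) {
  const fresh = sessions.filter(s => s.modifiedAt > lastRunMs);
  const watermark = fresh.reduce((m, s) => Math.max(m, s.modifiedAt), lastRunMs);
  return { fresh, watermark };
}
```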

📊 Current Scale

  • Conversations: 664 processed sessions

  • Messages: 118,453+ indexed messages

  • Vector Chunks: 13,847 semantic chunks

  • Database Size: ~420MB optimized storage

  • Embedding Cost: ~$0.57 (one-time migration)
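A one-time embedding cost like the figure above can be estimated before running a migration. A rough sketch, assuming the common ~4 characters-per-token heuristic and the ~$0.0001 per 1,000 tokens figure quoted in this document (verify against current OpenAI pricing):

```typescript
// Rough embedding-cost estimate: tokens ≈ chars / 4.
// PRICE_PER_1K_TOKENS is an assumed figure; check current OpenAI pricing.
const PRICE_PER_1K_TOKENS = 0.0001;

function estimateEmbeddingCost(totalChars: number): { tokens: number; usd: number } {
  const tokens = Math.ceil(totalChars / 4);
  return { tokens, usd: (tokens / 1000) * PRICE_PER_1K_TOKENS };
}
```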

🛠️ Technical Stack

  • Runtime: Node.js with TypeScript

  • Database: SQLite with FTS and vector extensions

  • Embeddings: OpenAI text-embedding-3-small

  • Protocol: Model Context Protocol (MCP)

  • Search: Hybrid semantic + keyword matching
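Under the hood, semantic matching in a stack like this boils down to comparing embedding vectors, most commonly by cosine similarity. A self-contained sketch of that comparison:

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```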

🔒 Security Configuration

Environment Variables Setup

  1. Copy the environment template:

     cp .env.example .env

  2. Configure your API key:

     # Edit .env and add your OpenAI API key
     OPENAI_API_KEY=your_actual_api_key_here

Security Best Practices

  • ✅ Environment Variables: All sensitive data is configured via environment variables

  • ✅ No Hardcoded Secrets: API keys are never committed to version control

  • ✅ Secure Defaults: Vector search gracefully degrades without API key

  • ✅ Read-Only Access: OpenAI API is used only for text embedding generation

  • ✅ Local Processing: All conversation data remains on your system

  • ✅ Cost Control: Built-in token estimation and cost tracking

API Key Management

  • Required For: Vector search, semantic search, AI-powered naming

  • Not Required For: Traditional keyword search, conversation management

  • Permissions: Read-only access to OpenAI embeddings API

  • Cost: ~$0.0001 per 1,000 tokens (very low cost for typical usage)

  • Rate Limits: Automatic batching and retry logic included
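The retry behavior mentioned above can be pictured as exponential backoff around the embedding call. A sketch under stated assumptions (the function names and delay schedule are illustrative, not the server's actual API):

```typescript
// Retry an async operation with exponential backoff, e.g. an embeddings
// request that hit a rate limit. Delays grow as base, 2x base, 4x base, ...
async function withRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // out of attempts: re-raise
      await new Promise(res => setTimeout(res, baseDelayMs * 2 ** attempt));
    }
  }
}
```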

Data Privacy

  • Local Storage: All conversation data stored locally in SQLite

  • No Data Sharing: Conversation content is never sent to external services, except to OpenAI for embedding generation

  • User Control: Vector search entirely optional and user-controlled

  • Audit Trail: All API usage logged with token counts and costs

⚑ Quick Start

Prerequisites

# 1. Copy and configure environment variables
cp .env.example .env
# Edit .env with your OpenAI API key (optional)

# 2. Install dependencies
npm install

Build and Run

# Build the server
npm run build

# Test direct communication
echo '{"jsonrpc": "2.0", "method": "tools/list", "id": 1}' | node dist/src/index.js

MCP Integration

Add to your Claude Code configuration:

{
  "conversation-search": {
    "type": "stdio",
    "command": "node",
    "args": ["/path/to/conversation-search/dist/src/index.js"],
    "env": {}
  }
}

🔍 Available Tools

Traditional Search

  • search_conversations - Keyword search with role filtering

  • get_recent_conversations - Latest conversations with project filtering

  • get_conversation_details - Full session message history

  • get_session_for_resume - Resume-formatted conversation data

Vector Search (Requires OpenAI API Key)

  • vector_search_conversations - Semantic similarity search

  • hybrid_search_conversations - Combined semantic + keyword search

  • get_chunk_with_context - Expand search results with adjacent chunks
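Like the other tools, the vector tools are invoked over MCP's JSON-RPC transport. A hypothetical request (the `query` and `limit` argument names are assumptions; check the server's actual tool schema via `tools/list`):

```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "vector_search_conversations",
    "arguments": { "query": "debugging the sqlite migration", "limit": 5 }
  },
  "id": 1
}
```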

Management Tools

  • rename_conversation - Assign custom conversation names

  • generate_conversation_summary - AI-powered title generation

  • list_conversations_with_names - Named conversation listing

  • batch_rename_recent - Bulk conversation naming

Database Operations

  • update_database - Full conversation database rebuild

  • update_database_incremental - Process only new conversations

  • get_indexing_stats - Database statistics and health metrics

  • migrate_to_vector_database - One-time vector embedding migration

🎯 Performance

  • Search Speed: Sub-second response for most queries

  • Memory Efficient: SQLite-based storage with optimized indexes

  • Scalable: Handles 100K+ messages with consistent performance

  • Graceful Degradation: Traditional search works without OpenAI API key

🔧 Monitoring

Check server health:

# Get comprehensive statistics
echo '{"jsonrpc": "2.0", "method": "tools/call", "params": {"name": "get_indexing_stats"}, "id": 1}' | node dist/src/index.js

Expected output includes traditional and vector database metrics, processing dates, and configuration status.

📝 License

Private development tool - not for redistribution.
