
šŸ’¬ LiveKit RAG Assistant v2.0

Enterprise-grade AI semantic search + real-time web integration for LiveKit documentation

šŸŽÆ Features

  • Dual Search: Pinecone docs (3,000+ vectors) + Tavily real-time web

  • Standard MCP: Async LangChain with Model Context Protocol

  • Ultra-Fast: Groq LLM (llama-3.3-70b) sub-5s responses

  • Premium UI: Glassmorphism design with 60+ animations

  • Source Attribution: Full transparency on every answer

šŸš€ Quick Start

```shell
# Setup
conda create -n langmcp python=3.12
conda activate langmcp
pip install -r requirements.txt

# Configure .env
GROQ_API_KEY=your_key
TAVILY_API_KEY=your_key
PINECONE_API_KEY=your_key
PINECONE_INDEX_NAME=livekit-docs

# Terminal 1: Start MCP Server
python mcp_server_standard.py

# Terminal 2: Start UI
streamlit run app.py
```

The app opens at http://localhost:8501.

šŸ—ļø Architecture

```
Streamlit (app.py)
  → MCP Server → Dual Search
      ā”œā”€ Pinecone: semantic search on embeddings (384-dim)
      └─ Tavily: real-time web results
  → Groq LLM (2048 tokens, temp 0.3)
  → Response + Sources
```
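The routing above can be sketched as follows. This is a hypothetical illustration of the dual-search flow, not the repo's actual API: the function names (`search_pinecone`, `search_tavily`, `answer`) and result shapes are assumptions, and the real backends would call Pinecone, Tavily, and Groq.

```python
# Hypothetical sketch of the dual-search routing; stubs stand in for
# the real Pinecone / Tavily / Groq calls.

def search_pinecone(query: str) -> list[dict]:
    # Placeholder for a 384-dim embedding similarity lookup.
    return [{"text": "LiveKit rooms are created via the server API.", "source": "docs/rooms.md"}]

def search_tavily(query: str) -> list[dict]:
    # Placeholder for a real-time web search call.
    return [{"text": "Latest LiveKit release notes.", "source": "https://example.com"}]

def answer(query: str, mode: str) -> dict:
    """Route to the chosen backend and attach sources for attribution."""
    hits = search_pinecone(query) if mode == "docs" else search_tavily(query)
    context = "\n".join(h["text"] for h in hits)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # The prompt would be sent to the Groq LLM (llama-3.3-70b) here.
    return {"prompt": prompt, "sources": [h["source"] for h in hits]}

result = answer("How do I set up LiveKit?", mode="docs")
print(result["sources"])
```

Returning the source list alongside the prompt is what makes per-answer source attribution possible in the UI.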

šŸ”§ Tech Stack

| Layer      | Tech         | Purpose                      |
|------------|--------------|------------------------------|
| Frontend   | Streamlit    | Premium glassmorphism UI     |
| Backend    | MCP Standard | Async subprocess             |
| LLM        | Groq API     | Ultra-fast inference         |
| Embeddings | HuggingFace  | all-MiniLM-L6-v2 (384-dim)   |
| Vector DB  | Pinecone     | Serverless similarity search |
| Web Search | Tavily       | Real-time internet results   |
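The 384-dim MiniLM embeddings in the stack above are typically compared by cosine similarity, which is also the usual metric for a Pinecone index. A minimal sketch of the metric itself (toy 3-dim vectors stand in for the real 384-dim embeddings):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product divided by the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 0.0, 1.0]
v3 = [0.0, 1.0, 0.0]
print(round(cosine_similarity(v1, v2), 3))  # identical direction -> 1.0
print(round(cosine_similarity(v1, v3), 3))  # orthogonal -> 0.0
```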

šŸ“š Usage

  1. Choose mode: šŸ“š Docs or šŸŒ Web

  2. Ask naturally: "How do I set up LiveKit?"

  3. Get instant answer with šŸ“„ sources

  4. Copy messages or re-ask from history

⚔ Performance

  • First query: ~15-20s (model load)

  • Cached queries: 2-5s

  • Search latency: <500ms
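The gap between the first query (~15-20s) and repeat queries (2-5s) is typical of one-time model loading plus per-query caching. A minimal sketch of the caching half, assuming queries are memoized by their normalized text (the decorator and stub below are illustrative, not the repo's actual code):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def retrieve(query: str) -> str:
    # Stand-in for the slow embed + search + LLM round trip.
    return f"answer for {query!r}"

retrieve("how do i set up livekit?")  # slow path on first call
retrieve("how do i set up livekit?")  # served from cache
print(retrieve.cache_info().hits)     # -> 1
```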

šŸ› ļø Configuration

```shell
GROQ_API_KEY=gsk_***
TAVILY_API_KEY=tvly_***
PINECONE_API_KEY=***
PINECONE_INDEX_NAME=livekit-docs
```

šŸ”„ Populate Docs

```shell
python ingest_docs_quick.py  # Creates 3,000+ vector chunks
```
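A minimal sketch of how an ingestion script like this one might split documentation into overlapping chunks before embedding them. The `chunk_text` helper and the size/overlap values are assumptions for illustration, not the repo's actual settings:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Slide a fixed-size window with overlap so context isn't cut mid-thought."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "LiveKit is an open source WebRTC stack. " * 50
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))
```

Each chunk would then be embedded (e.g. with all-MiniLM-L6-v2) and upserted into the Pinecone index.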

šŸ“Š Files

  • app.py - Streamlit UI with premium design

  • mcp_server_standard.py - MCP server with tools

  • ingest_docs_quick.py - Document ingestion

  • requirements.txt - Dependencies

  • .env - API keys

🚨 Troubleshooting

| Issue               | Solution                                    |
|---------------------|---------------------------------------------|
| No results          | Try web mode or different keywords          |
| MCP not found       | Start mcp_server_standard.py in Terminal 1  |
| Slow first response | Normal (15-20s); the model initializes once |
| API errors          | Verify all keys in the .env file            |

✨ Features

āœ… Real-time chat with 60+ animations
āœ… Semantic + keyword hybrid search
āœ… Copy-to-clipboard for messages
āœ… Recent query suggestions
āœ… System status dashboard
āœ… Chat history persistence
āœ… Query validation + error handling


Version: 2.0 | Status: āœ… Production Ready | Created: November 2025

šŸ‘Øā€šŸ’» By | ļæ½ Open Source | ā¤ļø For Developers
