
💬 LiveKit RAG Assistant v2.0

Enterprise-grade AI semantic search + real-time web integration for LiveKit documentation

🎯 Features

- Dual Search: Pinecone docs (3,000+ vectors) + Tavily real-time web
- Standard MCP: Async LangChain with the Model Context Protocol
- Ultra-Fast: Groq LLM (llama-3.3-70b) with sub-5s responses
- Premium UI: Glassmorphism design with 60+ animations
- Source Attribution: Full transparency on every answer

🚀 Quick Start

```bash
# Setup
conda create -n langmcp python=3.12
conda activate langmcp
pip install -r requirements.txt

# Configure .env
GROQ_API_KEY=your_key
TAVILY_API_KEY=your_key
PINECONE_API_KEY=your_key
PINECONE_INDEX_NAME=livekit-docs

# Terminal 1: Start MCP Server
python mcp_server_standard.py

# Terminal 2: Start UI
streamlit run app.py
```

App opens at http://localhost:8501

πŸ—οΈ Architecture

```
Streamlit (app.py) → MCP Server → Dual Search
  ├─ Pinecone: semantic search on embeddings (384-dim)
  └─ Tavily: real-time web results
        ↓
Groq LLM (2048 tokens, temp 0.3) → Response + Sources
```
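The flow above can be sketched as a minimal async dispatcher. This is an illustrative stand-in, not the server's actual code: `search_docs`, `search_web`, and `dual_search` are hypothetical names for the Pinecone/Tavily tool calls that `mcp_server_standard.py` exposes.

```python
import asyncio

# Hypothetical stand-ins for the real Pinecone / Tavily tools.
async def search_docs(query: str) -> list[str]:
    return [f"[docs] {query}"]

async def search_web(query: str) -> list[str]:
    return [f"[web] {query}"]

async def dual_search(query: str, mode: str = "docs") -> dict:
    """Route the query to the chosen backend and attach its sources."""
    hits = await (search_docs(query) if mode == "docs" else search_web(query))
    # The real server would pass `hits` to the Groq LLM here and return
    # the generated answer alongside the raw sources.
    return {"mode": mode, "sources": hits}

result = asyncio.run(dual_search("How do I set up LiveKit?"))
```

The real tools run as an async subprocess behind MCP, so the UI stays responsive while both backends are queried.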

🔧 Tech Stack

| Layer | Tech | Purpose |
|---|---|---|
| Frontend | Streamlit | Premium glassmorphism UI |
| Backend | MCP Standard | Async subprocess |
| LLM | Groq API | Ultra-fast inference |
| Embeddings | HuggingFace | all-MiniLM-L6-v2 (384-dim) |
| Vector DB | Pinecone | Serverless similarity search |
| Web Search | Tavily | Real-time internet results |
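Pinecone ranks the 384-dim MiniLM embeddings by vector similarity. Assuming a cosine metric (the usual choice for this model; the index's actual metric isn't stated here), the ranking score reduces to:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors:
    1.0 for identical directions, 0.0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dim vectors for illustration; the real embeddings are 384-dim.
score = cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])  # 1.0
```

Serverless Pinecone computes this on the index side; the client only ships the query embedding and gets back the top-k nearest chunks.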

📚 Usage

1. Choose a mode: 📚 Docs or 🌐 Web
2. Ask naturally: "How do I set up LiveKit?"
3. Get an instant answer with 📄 sources
4. Copy messages or re-ask from history

⚡ Performance

- First query: ~15-20s (model load)
- Cached queries: 2-5s
- Search latency: <500ms
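The gap between the first and later queries comes from paying the model-load cost once and reusing cached results. A minimal sketch of the idea, using `functools.lru_cache` as a stand-in (the app itself more likely uses Streamlit's built-in caching such as `st.cache_resource`):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def answer(query: str) -> str:
    # The first call for a given query pays the full cost
    # (model load + search + LLM); identical repeats return
    # instantly from the in-memory cache.
    return f"answer for: {query}"

answer("How do I set up LiveKit?")  # slow path on first call
answer("How do I set up LiveKit?")  # served from cache
```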

πŸ› οΈ Configuration

```bash
GROQ_API_KEY=gsk_***
TAVILY_API_KEY=tvly_***
PINECONE_API_KEY=***
PINECONE_INDEX_NAME=livekit-docs
```
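The app presumably loads these with python-dotenv; what that amounts to is a simple `KEY=VALUE` parse. A self-contained sketch of the idea (not the library's actual code):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

config = parse_env(
    "GROQ_API_KEY=gsk_123\n# comment\nPINECONE_INDEX_NAME=livekit-docs"
)
```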

🔄 Populate Docs

```bash
python ingest_docs_quick.py  # creates 3,000+ vector chunks
```
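To produce those chunks, the ingestion script presumably splits the docs into overlapping windows before embedding and upserting them to Pinecone. A sketch of that chunking step; the size and overlap values here are illustrative, not the script's actual settings:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so sentences
    straddling a boundary still appear whole in at least one chunk."""
    assert 0 <= overlap < size
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

chunks = chunk_text("x" * 1000)  # 3 chunks: starts at 0, 450, 900
```

Each chunk would then be embedded with all-MiniLM-L6-v2 and upserted with its source URL as metadata, which is what enables the 📄 source attribution in answers.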

📊 Files

- `app.py` - Streamlit UI with premium design
- `mcp_server_standard.py` - MCP server with tools
- `ingest_docs_quick.py` - Document ingestion
- `requirements.txt` - Dependencies
- `.env` - API keys

🚨 Troubleshooting

| Issue | Solution |
|---|---|
| No results | Try web mode or different keywords |
| MCP not found | Start `mcp_server_standard.py` in Terminal 1 |
| Slow first response | Normal (15-20s); the model initializes once |
| API errors | Verify all keys in `.env` |

✨ Features

- ✅ Real-time chat with 60+ animations
- ✅ Semantic + keyword hybrid search
- ✅ Copy-to-clipboard for messages
- ✅ Recent query suggestions
- ✅ System status dashboard
- ✅ Chat history persistence
- ✅ Query validation + error handling


Version: 2.0 | Status: ✅ Production Ready | Created: November 2025

πŸ‘¨β€πŸ’» By | οΏ½ Open Source | ❀️ For Developers

