# Quick Start Guide
## 1. Install Dependencies
```bash
cd /Users/saimanvithmacbookair/Desktop/Updation_MCP_Local
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install
pip install -e .
```
## 2. Configure Environment
```bash
# Copy template
cp .env.example .env
# Edit .env - MINIMUM required:
# 1. Choose LLM provider
# 2. Set corresponding API key
# 3. Set Updation API URL
```
**Example for OpenAI**:
```bash
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o
UPDATION_API_BASE_URL=http://127.0.0.1:8000/api
```
## 3. Test LLM Abstraction
Create `test_llm.py`:
```python
import asyncio

from src.llm import get_llm_provider


async def test():
    provider = get_llm_provider()
    print(f"✅ Using: {provider.provider_name} - {provider.model_name}")

    # Test generation
    response = await provider.generate(
        messages=[
            {"role": "user", "content": "Say hello!"}
        ]
    )
    print(f"✅ Response: {response.content}")
    print(f"✅ Tokens used: {response.usage['total_tokens']}")


asyncio.run(test())
```
Run:
```bash
python test_llm.py
```
## 4. Switch Providers
**Try Claude**:
```bash
# In .env, change:
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here
```
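**Or Gemini** (the variable names below follow the same pattern as the other providers but are assumptions; check `.env.example` for the actual keys):

```bash
LLM_PROVIDER=gemini
GEMINI_API_KEY=your-key-here
```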
Run the test again - no code changes needed; it just works.
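Under the hood, `get_llm_provider()` presumably keys off the `LLM_PROVIDER` variable. A minimal sketch of that selection step (the `resolve_provider` helper, the supported set, and the `openai` default are assumptions, not the project's actual code):

```python
import os

# Provider keys matching the abstraction's supported backends.
SUPPORTED_PROVIDERS = {"openai", "anthropic", "gemini"}


def resolve_provider() -> str:
    """Read and validate LLM_PROVIDER, defaulting to 'openai' (assumed default)."""
    name = os.getenv("LLM_PROVIDER", "openai").strip().lower()
    if name not in SUPPORTED_PROVIDERS:
        raise ValueError(
            f"Unsupported LLM_PROVIDER {name!r}; "
            f"expected one of {sorted(SUPPORTED_PROVIDERS)}"
        )
    return name
```

Because the choice is a single environment variable, swapping backends never touches application code.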
## 5. Next Steps
Once remaining components are built:
```bash
# Terminal 1: Start MCP Server
python -m src.mcp_server.server
# Terminal 2: Start Web Chat API
python -m src.web_chat.main
# Terminal 3: Test
curl http://localhost:8002/health
```
## What's Ready Now
✅ Configuration system
✅ LLM abstraction (OpenAI, Claude, Gemini)
✅ Core utilities (envelope, exceptions)
✅ Project structure
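The envelope utility listed above presumably standardizes response shapes across tools and APIs. A hypothetical minimal version (the field names `ok`, `data`, `error` and the `success`/`failure` helpers are assumptions, not the project's actual schema):

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class Envelope:
    """Uniform wrapper for tool/API results (hypothetical shape)."""
    ok: bool
    data: Any = None
    error: Optional[str] = None

    def to_dict(self) -> dict:
        return {"ok": self.ok, "data": self.data, "error": self.error}


def success(data: Any) -> Envelope:
    return Envelope(ok=True, data=data)


def failure(message: str) -> Envelope:
    return Envelope(ok=False, error=message)
```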
## What's Next
⏳ MCP server with tools
⏳ Orchestrator with tool calling
⏳ Web API with FastAPI
⏳ Observability (logging, metrics)
⏳ Redis storage
⏳ Tests