# Updation MCP Local Server

> Production-grade Model Context Protocol (MCP) server with an LLM-agnostic architecture
## 🌟 Key Features

- ✅ **LLM-Agnostic**: Seamlessly switch between OpenAI, Claude, Gemini, or Azure OpenAI
- ✅ **Production-Ready**: Structured logging, metrics, error handling, and observability
- ✅ **Scalable**: Redis-backed state management for horizontal scaling
- ✅ **Secure**: RBAC, rate limiting, input validation, and secret management
- ✅ **Modular**: Auto-discovery tool architecture for easy extensibility
- ✅ **Type-Safe**: Full Pydantic validation throughout
- ✅ **Resilient**: Circuit breakers, retries, and graceful degradation
## 🏗️ Architecture

```
┌─────────────────────────────────────────────────────────┐
│                  FastAPI Web Chat API                   │
│                       (Port 8002)                       │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│                    LLM Orchestrator                     │
│  ┌───────────────────────────────────────────────────┐  │
│  │          LLM Provider Abstraction Layer           │  │
│  │    ┌──────────┐   ┌──────────┐   ┌──────────┐     │  │
│  │    │  OpenAI  │   │  Claude  │   │  Gemini  │     │  │
│  │    └──────────┘   └──────────┘   └──────────┘     │  │
│  └───────────────────────────────────────────────────┘  │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│                 MCP Server (Port 8050)                  │
│  ┌───────────────────────────────────────────────────┐  │
│  │           Auto-Discovery Tool Registry            │  │
│  │  ├── User Tools (subscriptions, bookings, etc.)   │  │
│  │  ├── Organization Tools (locations, resources)    │  │
│  │  └── Payment Tools                                │  │
│  └───────────────────────────────────────────────────┘  │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│                    External Services                    │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │ Updation API │  │    Redis     │  │  Prometheus  │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
└─────────────────────────────────────────────────────────┘
```

## 🚀 Quick Start
### 1. Prerequisites

- Python 3.11+
- Redis (required for conversation memory; see setup below)
- UV package manager (recommended) or pip

### 2. Installation
```bash
# Clone or navigate to the project
cd /Users/saimanvithmacbookair/Desktop/Updation_MCP_Local

# Create a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -e .

# Or with UV (faster)
uv pip install -e .
```

### 3. Install Redis (Mac M2)
**Redis is required for conversation memory to work!**

```bash
# Install Redis via Homebrew
brew install redis

# Start Redis (background service)
brew services start redis

# Verify it's running
redis-cli ping  # Should return: PONG
```

See **REDIS_SETUP.md** for detailed instructions and troubleshooting.
### 4. Configuration

```bash
# Copy the environment template
cp .env.example .env

# Edit .env with your actual values
nano .env  # or use your favorite editor
```

**Required settings:**

```bash
# LLM Provider
LLM_PROVIDER=openai
OPENAI_API_KEY=your-key-here

# Laravel API
UPDATION_API_BASE_URL=http://127.0.0.1:8000/api

# Redis (should already be correct)
REDIS_ENABLED=true
REDIS_URL=redis://localhost:6379/0

# Enable auto-reload for development (optional)
WEB_CHAT_RELOAD=true  # Auto-restart on code changes
```

### 5. Run the Services
**Terminal 1: MCP Server**

```bash
source .venv/bin/activate
python -m src.mcp_server.server
```

**Terminal 2: Web Chat API (with auto-reload)**

```bash
source .venv/bin/activate
python -m src.web_chat.main
```

> **Note:** With `WEB_CHAT_RELOAD=true`, Terminal 2 will auto-restart when you edit code!
**Terminal 3 (optional): Metrics server**

```bash
python -m src.observability.metrics_server
```
### 6. Test the Setup
**Quick health check:**
```bash
curl http://localhost:8002/health
```

**Test chat with Bearer token:**
```bash
# Replace with your actual Laravel token
TOKEN="11836|UAc9YiEKc9zO9MvNHKQqY9WwdkxW7qQyw3mqyNK5"

curl -X POST http://localhost:8002/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"message": "What can I do?"}'
```

**Test conversation memory:**
```bash
# First message
curl -X POST http://localhost:8002/chat \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "My name is John"}'

# Second message (should remember)
curl -X POST http://localhost:8002/chat \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "What is my name?"}'
```

**Expected:** the AI should respond "Your name is John" ✅
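Conceptually, the memory behind this test can be sketched as a per-user message list stored under a `conversation:{user_id}` key (the key pattern used in the Redis commands below). This is a hypothetical illustration, not the project's actual `src/storage` code; a plain dict stands in for Redis:

```python
import json


class MemoryBackend:
    """Dict-based stand-in for Redis (the role of the in-memory fallback)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


class ConversationStore:
    """Append-only conversation history keyed by user, serialized as JSON."""

    def __init__(self, backend):
        self.backend = backend

    def _key(self, user_id) -> str:
        return f"conversation:{user_id}"

    def append(self, user_id, role: str, content: str) -> None:
        raw = self.backend.get(self._key(user_id))
        history = json.loads(raw) if raw else []
        history.append({"role": role, "content": content})
        self.backend.set(self._key(user_id), json.dumps(history))

    def history(self, user_id) -> list:
        raw = self.backend.get(self._key(user_id))
        return json.loads(raw) if raw else []
```

On the second request, the orchestrator would load `history(user_id)` and prepend it to the LLM prompt, which is what lets the model answer "Your name is John."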
**Check cache stats:**

```bash
# User cache (Bearer tokens)
curl http://localhost:8002/cache/stats

# Redis conversation keys
redis-cli keys "conversation:*"
```

See **BEARER_TOKEN_AUTH.md** for complete API documentation.
## 📁 Project Structure

```
Updation_MCP_Local/
├── src/
│   ├── config/                # Configuration management
│   │   ├── __init__.py
│   │   └── settings.py        # Pydantic settings with validation
│   │
│   ├── core/                  # Core shared utilities
│   │   ├── __init__.py
│   │   ├── envelope.py        # Standard response envelope
│   │   ├── exceptions.py      # Custom exceptions
│   │   └── security.py        # RBAC and auth helpers
│   │
│   ├── llm/                   # LLM abstraction layer
│   │   ├── __init__.py
│   │   ├── base.py            # Abstract base provider
│   │   ├── openai.py          # OpenAI implementation
│   │   ├── anthropic.py       # Claude implementation
│   │   ├── google.py          # Gemini implementation
│   │   └── factory.py         # Provider factory
│   │
│   ├── mcp_server/            # MCP server implementation
│   │   ├── __init__.py
│   │   ├── server.py          # Main MCP server
│   │   └── tools/             # Tool modules
│   │       ├── __init__.py    # Auto-discovery
│   │       ├── users/         # User-related tools
│   │       ├── organizations/ # Org-related tools
│   │       └── payments/      # Payment tools
│   │
│   ├── orchestrator/          # LLM orchestration
│   │   ├── __init__.py
│   │   ├── client.py          # MCP client wrapper
│   │   ├── processor.py       # Query processing logic
│   │   └── policy.py          # RBAC policies
│   │
│   ├── web_chat/              # FastAPI web interface
│   │   ├── __init__.py
│   │   ├── main.py            # FastAPI app
│   │   ├── routes/            # API routes
│   │   ├── middleware/        # Custom middleware
│   │   └── dependencies.py    # FastAPI dependencies
│   │
│   ├── observability/         # Logging, metrics, tracing
│   │   ├── __init__.py
│   │   ├── logging.py         # Structured logging setup
│   │   ├── metrics.py         # Prometheus metrics
│   │   └── tracing.py         # Distributed tracing
│   │
│   └── storage/               # State management
│       ├── __init__.py
│       ├── redis_client.py    # Redis wrapper
│       └── memory.py          # In-memory fallback
│
├── tests/                     # Test suite
│   ├── unit/
│   ├── integration/
│   └── e2e/
│
├── scripts/                   # Utility scripts
│   ├── setup_redis.sh
│   └── health_check.sh
│
├── .env.example               # Environment template
├── .gitignore
├── pyproject.toml             # Dependencies
├── README.md
└── docker-compose.yml         # Local development stack
```

## 🔧 Configuration
All configuration is managed through environment variables (see `.env.example`).
### Switching LLM Providers

Simply change the `LLM_PROVIDER` environment variable:

```bash
# Use OpenAI
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...

# Use Claude
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Use Gemini
LLM_PROVIDER=google
GOOGLE_API_KEY=...
```

No code changes required! The system automatically routes to the correct provider.
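Under the hood, this kind of switching usually comes down to a small factory keyed on `LLM_PROVIDER`. A simplified sketch (the class names are placeholders; the real implementations live in `src/llm/`):

```python
import os


# Placeholder provider classes; the real ones would wrap the vendor SDKs.
class OpenAIProvider: ...
class AnthropicProvider: ...
class GoogleProvider: ...


_PROVIDERS = {
    "openai": OpenAIProvider,
    "anthropic": AnthropicProvider,
    "google": GoogleProvider,
}


def make_provider():
    """Instantiate the provider named by LLM_PROVIDER (defaults to openai)."""
    name = os.environ.get("LLM_PROVIDER", "openai").lower()
    if name not in _PROVIDERS:
        raise ValueError(f"Unknown LLM_PROVIDER: {name!r}")
    return _PROVIDERS[name]()
```

Because callers only ever see the common provider interface, swapping vendors is a one-line `.env` change.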
## 🛠️ Development

### Running Tests

```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run a specific test file
pytest tests/unit/test_llm_providers.py
```

### Code Quality

```bash
# Format code
ruff format .

# Lint
ruff check .

# Type checking
mypy src/
```

## 📊 Monitoring
### Metrics

Prometheus metrics are available at `http://localhost:9090/metrics`:

- `mcp_requests_total` - Total requests by tool and status
- `mcp_request_duration_seconds` - Request latency histogram
- `mcp_active_connections` - Current active connections
- `llm_api_calls_total` - LLM API calls by provider
- `llm_tokens_used_total` - Token usage tracking
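The metrics above are labeled counters and histograms: one metric name, with a separate time series per label combination (tool, status, provider). The real code would use `prometheus_client`; this dependency-free toy only illustrates the label semantics:

```python
from collections import Counter


class LabeledCounter:
    """Toy Prometheus-style counter: one value per unique label combination."""

    def __init__(self, name: str):
        self.name = name
        self._values = Counter()

    def inc(self, amount: int = 1, **labels) -> None:
        # Sort labels so {tool=..., status=...} and {status=..., tool=...} match.
        self._values[tuple(sorted(labels.items()))] += amount

    def value(self, **labels) -> int:
        return self._values[tuple(sorted(labels.items()))]
```

For example, `mcp_requests_total{tool="get_user_subscriptions", status="success"}` and the matching `status="error"` series count independently.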
### Logs

Structured JSON logs with trace IDs for correlation:
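One stdlib way to produce records in this shape is a custom `logging.Formatter` (a sketch; the project's actual `src/observability/logging.py` setup may use a structured-logging library instead, and the `extra={"fields": ...}` convention here is an assumption):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, merging extra fields."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname.lower(),
            "event": record.getMessage(),
        }
        # Fields passed via logger.info(..., extra={"fields": {...}}),
        # e.g. tool_name, user_id, duration_ms, trace_id.
        payload.update(getattr(record, "fields", {}) or {})
        return json.dumps(payload)
```

Attach it to a handler once at startup and every `logger.info("tool_executed", extra={"fields": {...}})` call emits a line like the example below.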
```json
{
  "timestamp": "2024-01-15T10:30:00Z",
  "level": "info",
  "event": "tool_executed",
  "tool_name": "get_user_subscriptions",
  "user_id": 123,
  "duration_ms": 245,
  "trace_id": "abc-123-def"
}
```

## 🔒 Security
- **RBAC**: Role-based access control for all tools
- **Rate Limiting**: Per-user and global rate limits
- **Input Validation**: Pydantic schemas for all inputs
- **Secret Management**: API keys are never logged or exposed
- **CORS**: Configurable allowed origins
- **HTTPS**: Enforced in production
## 🚢 Deployment

### Docker

```bash
docker build -t updation-mcp:latest .
docker run -p 8050:8050 -p 8002:8002 --env-file .env updation-mcp:latest
```

### Docker Compose

```bash
docker-compose up -d
```

## 📝 Adding New Tools
1. Create a tool module in `src/mcp_server/tools/your_domain/`
2. Implement `tool.py` with a `register(mcp)` function
3. Add schemas in `schemas.py`
4. Add business logic in `service.py`
5. Auto-discovery handles the rest!
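The auto-discovery step can be pictured as a loop over the tool modules that calls each one's `register(mcp)` if it exists. The real `tools/__init__.py` presumably walks the package with `pkgutil`/`importlib`; this self-contained sketch takes already-imported modules instead:

```python
def discover_and_register(mcp, modules) -> list:
    """Call register(mcp) on every module that defines one; return their names.

    In the real server, `modules` would come from importing every package
    under src/mcp_server/tools/ (e.g. via pkgutil.iter_modules).
    """
    registered = []
    for mod in modules:
        register = getattr(mod, "register", None)
        if callable(register):
            register(mcp)
            registered.append(mod.__name__)
    return registered
```

This is why dropping a new `your_domain/` package with a `register(mcp)` function is enough: no central list of tools has to be edited.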
Example:

```python
# src/mcp_server/tools/your_domain/tool.py
from mcp.server.fastmcp import FastMCP


def register(mcp: FastMCP) -> None:
    @mcp.tool()
    async def your_tool(param: str):
        """Tool description for the LLM."""
        return {"result": "data"}
```

## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes, with tests
4. Run quality checks: `ruff check . && pytest`
5. Submit a pull request
## 📄 License

[Your License Here]

## 🆘 Support

For issues or questions:

- GitHub Issues: [Your Repo]
- Email: [Your Email]
- Docs: [Your Docs URL]