Integrates Google's Gemini models as a backend for the system's LLM orchestrator to handle tool-based requests.
Interfaces with a Laravel-based backend API to provide tools for managing user subscriptions, bookings, organizational locations, resources, and payment processing.
Enables the use of OpenAI models to process natural language messages and interact with the server's tool registry.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Updation MCP list all active subscriptions for my organization".
That's it! The server will respond to your query, and you can continue using it as needed.
Updation MCP Local Server
Production-grade Model Context Protocol (MCP) server with LLM-agnostic architecture
Key Features
LLM-Agnostic: Seamlessly switch between OpenAI, Claude, Gemini, or Azure OpenAI
Production-Ready: Structured logging, metrics, error handling, and observability
Scalable: Redis-backed state management for horizontal scaling
Secure: RBAC, rate limiting, input validation, and secret management
Modular: Auto-discovery tool architecture for easy extensibility
Type-Safe: Full Pydantic validation throughout
Resilient: Circuit breakers, retries, and graceful degradation
Architecture
┌──────────────────────────────────────────────────────────────┐
│                     FastAPI Web Chat API                     │
│                         (Port 8002)                          │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                       LLM Orchestrator                       │
│  ┌────────────────────────────────────────────────────────┐  │
│  │             LLM Provider Abstraction Layer             │  │
│  │    ┌──────────┐    ┌──────────┐    ┌──────────┐        │  │
│  │    │  OpenAI  │    │  Claude  │    │  Gemini  │        │  │
│  │    └──────────┘    └──────────┘    └──────────┘        │  │
│  └────────────────────────────────────────────────────────┘  │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                    MCP Server (Port 8050)                    │
│  ┌────────────────────────────────────────────────────────┐  │
│  │              Auto-Discovery Tool Registry              │  │
│  │  ├── User Tools (subscriptions, bookings, etc.)        │  │
│  │  ├── Organization Tools (locations, resources)         │  │
│  │  └── Payment Tools                                     │  │
│  └────────────────────────────────────────────────────────┘  │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                       External Services                      │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐    │
│  │ Updation API │    │    Redis     │    │  Prometheus  │    │
│  └──────────────┘    └──────────────┘    └──────────────┘    │
└──────────────────────────────────────────────────────────────┘
Quick Start
1. Prerequisites
Python 3.11+
Redis (required for conversation memory - see setup below)
UV package manager (recommended) or pip
2. Installation
# Clone or navigate to project
cd /Users/saimanvithmacbookair/Desktop/Updation_MCP_Local
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install dependencies
pip install -e .
# Or with UV (faster)
uv pip install -e .
3. Install Redis (Mac M2)
Redis is required for conversation memory to work!
# Install Redis via Homebrew
brew install redis
# Start Redis (background service)
brew services start redis
# Verify it's running
redis-cli ping  # Should return: PONG
4. Configuration
# Copy environment template
cp .env.example .env
# Edit .env with your actual values
nano .env  # or use your favorite editor
Required settings:
# LLM Provider
LLM_PROVIDER=openai
OPENAI_API_KEY=your-key-here
# Laravel API
UPDATION_API_BASE_URL=http://127.0.0.1:8000/api
# Redis (should already be correct)
REDIS_ENABLED=true
REDIS_URL=redis://localhost:6379/0
# Enable auto-reload for development (optional)
WEB_CHAT_RELOAD=true  # Auto-restart on code changes
5. Run the Services
Terminal 1: MCP Server
source .venv/bin/activate
python -m src.mcp_server.server
Terminal 2: Web Chat API (with auto-reload)
source .venv/bin/activate
python -m src.web_chat.main
Note: With WEB_CHAT_RELOAD=true, Terminal 2 will auto-restart when you edit code!
Terminal 3 (optional): Start metrics server
python -m src.observability.metrics_server
6. Test the Setup
Quick health check:
curl http://localhost:8002/health
Test chat with Bearer token:
# Replace with your actual Laravel token
TOKEN="11836|UAc9YiEKc9zO9MvNHKQqY9WwdkxW7qQyw3mqyNK5"
curl -X POST http://localhost:8002/chat \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{"message": "What can I do?"}'Test conversation memory:
# First message
curl -X POST http://localhost:8002/chat \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"message": "My name is John"}'
# Second message (should remember)
curl -X POST http://localhost:8002/chat \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"message": "What is my name?"}'Expected: AI should respond "Your name is John" β
Check cache stats:
# User cache (Bearer tokens)
curl http://localhost:8002/cache/stats
# Redis conversation keys
redis-cli keys "conversation:*"
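Those conversation:* keys are written by the Redis-backed memory store. For orientation, here is a minimal sketch of how such a store could work; it is illustrative only, the actual implementation lives in src/storage/redis_client.py and src/storage/memory.py, and the list layout and TTL are assumptions:

# Illustrative conversation store; not the actual src/storage implementation.
import json
import redis

r = redis.Redis.from_url("redis://localhost:6379/0")

def append_message(conversation_id: str, role: str, content: str) -> None:
    """Push one message onto the conversation's Redis list, refreshing its TTL."""
    key = f"conversation:{conversation_id}"
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.expire(key, 3600)  # one-hour retention is an assumption

def load_history(conversation_id: str) -> list[dict]:
    """Return the full message history for a conversation, oldest first."""
    key = f"conversation:{conversation_id}"
    return [json.loads(m) for m in r.lrange(key, 0, -1)]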
Project Structure
Updation_MCP_Local/
├── src/
│   ├── config/                 # Configuration management
│   │   ├── __init__.py
│   │   └── settings.py         # Pydantic settings with validation
│   │
│   ├── core/                   # Core shared utilities
│   │   ├── __init__.py
│   │   ├── envelope.py         # Standard response envelope
│   │   ├── exceptions.py       # Custom exceptions
│   │   └── security.py         # RBAC and auth helpers
│   │
│   ├── llm/                    # LLM abstraction layer
│   │   ├── __init__.py
│   │   ├── base.py             # Abstract base provider
│   │   ├── openai.py           # OpenAI implementation
│   │   ├── anthropic.py        # Claude implementation
│   │   ├── google.py           # Gemini implementation
│   │   └── factory.py          # Provider factory
│   │
│   ├── mcp_server/             # MCP server implementation
│   │   ├── __init__.py
│   │   ├── server.py           # Main MCP server
│   │   └── tools/              # Tool modules
│   │       ├── __init__.py     # Auto-discovery
│   │       ├── users/          # User-related tools
│   │       ├── organizations/  # Org-related tools
│   │       └── payments/       # Payment tools
│   │
│   ├── orchestrator/           # LLM orchestration
│   │   ├── __init__.py
│   │   ├── client.py           # MCP client wrapper
│   │   ├── processor.py        # Query processing logic
│   │   └── policy.py           # RBAC policies
│   │
│   ├── web_chat/               # FastAPI web interface
│   │   ├── __init__.py
│   │   ├── main.py             # FastAPI app
│   │   ├── routes/             # API routes
│   │   ├── middleware/         # Custom middleware
│   │   └── dependencies.py     # FastAPI dependencies
│   │
│   ├── observability/          # Logging, metrics, tracing
│   │   ├── __init__.py
│   │   ├── logging.py          # Structured logging setup
│   │   ├── metrics.py          # Prometheus metrics
│   │   └── tracing.py          # Distributed tracing
│   │
│   └── storage/                # State management
│       ├── __init__.py
│       ├── redis_client.py     # Redis wrapper
│       └── memory.py           # In-memory fallback
│
├── tests/                      # Test suite
│   ├── unit/
│   ├── integration/
│   └── e2e/
│
├── scripts/                    # Utility scripts
│   ├── setup_redis.sh
│   └── health_check.sh
│
├── .env.example                # Environment template
├── .gitignore
├── pyproject.toml              # Dependencies
├── README.md
└── docker-compose.yml          # Local development stack
Configuration
All configuration is managed through environment variables (see .env.example).
Switching LLM Providers
Simply change the LLM_PROVIDER environment variable:
# Use OpenAI
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
# Use Claude
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Use Gemini
LLM_PROVIDER=google
GOOGLE_API_KEY=...
No code changes required! The system automatically routes to the correct provider.
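Internally this routing is just a lookup keyed by LLM_PROVIDER. A minimal sketch of the pattern behind src/llm/factory.py follows; the class and method names here are illustrative, not the module's actual API:

# Illustrative provider factory; the real one lives in src/llm/factory.py.
import os
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    async def complete(self, messages: list[dict], tools: list[dict]) -> dict:
        """Run one completion with tool definitions attached."""

class OpenAIProvider(LLMProvider):
    async def complete(self, messages, tools):
        ...  # call the OpenAI API here

class AnthropicProvider(LLMProvider):
    async def complete(self, messages, tools):
        ...  # call the Anthropic API here

class GoogleProvider(LLMProvider):
    async def complete(self, messages, tools):
        ...  # call the Gemini API here

_PROVIDERS = {
    "openai": OpenAIProvider,
    "anthropic": AnthropicProvider,
    "google": GoogleProvider,
}

def get_provider() -> LLMProvider:
    """Instantiate the provider named by LLM_PROVIDER (defaults to openai)."""
    return _PROVIDERS[os.environ.get("LLM_PROVIDER", "openai")]()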
Development
Running Tests
# Install dev dependencies
pip install -e ".[dev]"
# Run all tests
pytest
# Run with coverage
pytest --cov=src --cov-report=html
# Run specific test file
pytest tests/unit/test_llm_providers.py
Code Quality
# Format code
ruff format .
# Lint
ruff check .
# Type checking
mypy src/
Monitoring
Metrics
Prometheus metrics available at http://localhost:9090/metrics:
mcp_requests_total - Total requests by tool and status
mcp_request_duration_seconds - Request latency histogram
mcp_active_connections - Current active connections
llm_api_calls_total - LLM API calls by provider
llm_tokens_used_total - Token usage tracking
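For reference, this is how such metrics are typically declared and updated with prometheus_client; the metric names match the list above, but the label names and helper are assumptions rather than the code in src/observability/metrics.py:

# Illustrative metric instrumentation; label names are guesses.
import time
from prometheus_client import Counter, Histogram, start_http_server

MCP_REQUESTS = Counter(
    "mcp_requests_total", "Total requests by tool and status", ["tool", "status"]
)
MCP_LATENCY = Histogram("mcp_request_duration_seconds", "Request latency", ["tool"])

def run_tool(tool_name: str) -> None:
    """Execute a tool while recording request count and latency."""
    start = time.perf_counter()
    try:
        ...  # execute the tool here
        MCP_REQUESTS.labels(tool=tool_name, status="ok").inc()
    except Exception:
        MCP_REQUESTS.labels(tool=tool_name, status="error").inc()
        raise
    finally:
        MCP_LATENCY.labels(tool=tool_name).observe(time.perf_counter() - start)

start_http_server(9090)  # expose /metrics on the port mentioned above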
Logs
Structured JSON logs with trace IDs for correlation:
{
"timestamp": "2024-01-15T10:30:00Z",
"level": "info",
"event": "tool_executed",
"tool_name": "get_user_subscriptions",
"user_id": 123,
"duration_ms": 245,
"trace_id": "abc-123-def"
}
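Log lines in that shape are typically produced by a bound structured logger. A sketch assuming structlog follows; the project's actual setup lives in src/observability/logging.py and may differ:

# Illustrative structured logging setup; see src/observability/logging.py.
import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso", key="timestamp"),
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger()
log.info(
    "tool_executed",
    tool_name="get_user_subscriptions",
    user_id=123,
    duration_ms=245,
    trace_id="abc-123-def",
)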
Security
RBAC: Role-based access control for all tools
Rate Limiting: Per-user and global rate limits
Input Validation: Pydantic schemas for all inputs
Secret Management: Never log or expose API keys
CORS: Configurable allowed origins
HTTPS: Enforce HTTPS in production
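As a sketch of how the RBAC piece can sit at the tool boundary, here is a hedged example; the decorator, role names, and user shape are illustrative, and the real policies live in src/orchestrator/policy.py and src/core/security.py:

# Illustrative RBAC guard; not the project's actual policy code.
from functools import wraps

class PermissionDenied(Exception):
    pass

def require_role(*allowed_roles: str):
    """Reject tool calls from users whose role is not in allowed_roles."""
    def decorator(func):
        @wraps(func)
        async def wrapper(user: dict, *args, **kwargs):
            if user.get("role") not in allowed_roles:
                raise PermissionDenied(
                    f"{func.__name__} requires one of {allowed_roles}"
                )
            return await func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin", "org_manager")
async def delete_location(user: dict, location_id: int) -> dict:
    ...  # call the Updation API here
    return {"deleted": location_id}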
Deployment
Docker
docker build -t updation-mcp:latest .
docker run -p 8050:8050 -p 8002:8002 --env-file .env updation-mcp:latest
Docker Compose
docker-compose up -d
Adding New Tools
1. Create a tool module in src/mcp_server/tools/your_domain/
2. Implement tool.py with a register(mcp) function
3. Add schemas in schemas.py
4. Add business logic in service.py
5. Auto-discovery handles the rest!
Example:
# src/mcp_server/tools/your_domain/tool.py
from mcp.server.fastmcp import FastMCP

def register(mcp: FastMCP) -> None:
    @mcp.tool()
    async def your_tool(param: str):
        """Tool description for LLM."""
        return {"result": "data"}
Contributing
Fork the repository
Create a feature branch
Make your changes with tests
Run quality checks:
ruff check . && pytest
Submit a pull request
License
[Your License Here]
Support
For issues or questions:
GitHub Issues: [Your Repo]
Email: [Your Email]
Docs: [Your Docs URL]