
Updation MCP Local Server

Production-grade Model Context Protocol (MCP) server with LLM-agnostic architecture

🌟 Key Features

  • ✅ LLM-Agnostic: Seamlessly switch between OpenAI, Claude, Gemini, or Azure OpenAI

  • ✅ Production-Ready: Structured logging, metrics, error handling, and observability

  • ✅ Scalable: Redis-backed state management for horizontal scaling

  • ✅ Secure: RBAC, rate limiting, input validation, and secret management

  • ✅ Modular: Auto-discovery tool architecture for easy extensibility

  • ✅ Type-Safe: Full Pydantic validation throughout

  • ✅ Resilient: Circuit breakers, retries, and graceful degradation

πŸ—οΈ Architecture

```
┌──────────────────────────────────────────────────────────┐
│                   FastAPI Web Chat API                   │
│                       (Port 8002)                        │
└─────────────────────────────┬────────────────────────────┘
                              │
                              ▼
┌──────────────────────────────────────────────────────────┐
│                     LLM Orchestrator                     │
│  ┌────────────────────────────────────────────────────┐  │
│  │           LLM Provider Abstraction Layer           │  │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────┐          │  │
│  │  │  OpenAI  │  │  Claude  │  │  Gemini  │          │  │
│  │  └──────────┘  └──────────┘  └──────────┘          │  │
│  └────────────────────────────────────────────────────┘  │
└─────────────────────────────┬────────────────────────────┘
                              │
                              ▼
┌──────────────────────────────────────────────────────────┐
│                  MCP Server (Port 8050)                  │
│  ┌────────────────────────────────────────────────────┐  │
│  │            Auto-Discovery Tool Registry            │  │
│  │  ├── User Tools (subscriptions, bookings, etc.)    │  │
│  │  ├── Organization Tools (locations, resources)     │  │
│  │  └── Payment Tools                                 │  │
│  └────────────────────────────────────────────────────┘  │
└─────────────────────────────┬────────────────────────────┘
                              │
                              ▼
┌──────────────────────────────────────────────────────────┐
│                    External Services                     │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐    │
│  │ Updation API │  │    Redis     │  │  Prometheus  │    │
│  └──────────────┘  └──────────────┘  └──────────────┘    │
└──────────────────────────────────────────────────────────┘
```

🚀 Quick Start

1. Prerequisites

  • Python 3.11+

  • Redis (required for conversation memory - see setup below)

  • UV package manager (recommended) or pip

2. Installation

```bash
# Clone or navigate to project
cd /Users/saimanvithmacbookair/Desktop/Updation_MCP_Local

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -e .

# Or with UV (faster)
uv pip install -e .
```

3. Install Redis (Mac M2)

Redis is required for conversation memory to work!

```bash
# Install Redis via Homebrew
brew install redis

# Start Redis (background service)
brew services start redis

# Verify it's running
redis-cli ping
# Should return: PONG
```


4. Configuration

```bash
# Copy environment template
cp .env.example .env

# Edit .env with your actual values
nano .env  # or use your favorite editor
```

Required settings:

```bash
# LLM Provider
LLM_PROVIDER=openai
OPENAI_API_KEY=your-key-here

# Laravel API
UPDATION_API_BASE_URL=http://127.0.0.1:8000/api

# Redis (should already be correct)
REDIS_ENABLED=true
REDIS_URL=redis://localhost:6379/0

# Enable auto-reload for development (optional)
WEB_CHAT_RELOAD=true  # Auto-restart on code changes
```

5. Run the Services

Terminal 1: MCP Server

```bash
source .venv/bin/activate
python -m src.mcp_server.server
```

Terminal 2: Web Chat API (with auto-reload)

```bash
source .venv/bin/activate
python -m src.web_chat.main
```

Note: With WEB_CHAT_RELOAD=true, Terminal 2 will auto-restart when you edit code!

Terminal 3 (optional): Start metrics server

```bash
python -m src.observability.metrics_server
```

6. Test the Setup

Quick health check:

```bash
curl http://localhost:8002/health
```

Test chat with Bearer token:

```bash
# Replace with your actual Laravel token
TOKEN="11836|UAc9YiEKc9zO9MvNHKQqY9WwdkxW7qQyw3mqyNK5"

curl -X POST http://localhost:8002/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"message": "What can I do?"}'
```

Test conversation memory:

```bash
# First message
curl -X POST http://localhost:8002/chat \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "My name is John"}'

# Second message (should remember)
curl -X POST http://localhost:8002/chat \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "What is my name?"}'
```

Expected: the AI should respond "Your name is John" ✅

Check cache stats:

```bash
# User cache (Bearer tokens)
curl http://localhost:8002/cache/stats

# Redis conversation keys
redis-cli keys "conversation:*"
```
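
Those conversation:* keys are what make the memory test above work: each chat turn is appended under a per-user Redis key and replayed into the next LLM call. Below is a minimal sketch of that pattern with redis-py; the key prefix matches the redis-cli pattern above, but the message shape and TTL are illustrative assumptions, not the exact schema in src/storage/redis_client.py.

```python
# Sketch of Redis-backed conversation memory (illustrative schema).
import json

import redis

r = redis.Redis.from_url("redis://localhost:6379/0")

def append_message(user_id: int, role: str, content: str) -> None:
    """Push one chat turn onto the user's conversation list."""
    key = f"conversation:{user_id}"
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.expire(key, 3600)  # refresh a 1-hour TTL on every write (assumed)

def load_history(user_id: int) -> list[dict]:
    """Read the whole conversation back for the next LLM call."""
    return [json.loads(m) for m in r.lrange(f"conversation:{user_id}", 0, -1)]
```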


πŸ“ Project Structure

```
Updation_MCP_Local/
├── src/
│   ├── config/                  # Configuration management
│   │   ├── __init__.py
│   │   └── settings.py          # Pydantic settings with validation
│   │
│   ├── core/                    # Core shared utilities
│   │   ├── __init__.py
│   │   ├── envelope.py          # Standard response envelope
│   │   ├── exceptions.py        # Custom exceptions
│   │   └── security.py          # RBAC and auth helpers
│   │
│   ├── llm/                     # LLM abstraction layer
│   │   ├── __init__.py
│   │   ├── base.py              # Abstract base provider
│   │   ├── openai.py            # OpenAI implementation
│   │   ├── anthropic.py         # Claude implementation
│   │   ├── google.py            # Gemini implementation
│   │   └── factory.py           # Provider factory
│   │
│   ├── mcp_server/              # MCP server implementation
│   │   ├── __init__.py
│   │   ├── server.py            # Main MCP server
│   │   └── tools/               # Tool modules
│   │       ├── __init__.py      # Auto-discovery
│   │       ├── users/           # User-related tools
│   │       ├── organizations/   # Org-related tools
│   │       └── payments/        # Payment tools
│   │
│   ├── orchestrator/            # LLM orchestration
│   │   ├── __init__.py
│   │   ├── client.py            # MCP client wrapper
│   │   ├── processor.py         # Query processing logic
│   │   └── policy.py            # RBAC policies
│   │
│   ├── web_chat/                # FastAPI web interface
│   │   ├── __init__.py
│   │   ├── main.py              # FastAPI app
│   │   ├── routes/              # API routes
│   │   ├── middleware/          # Custom middleware
│   │   └── dependencies.py      # FastAPI dependencies
│   │
│   ├── observability/           # Logging, metrics, tracing
│   │   ├── __init__.py
│   │   ├── logging.py           # Structured logging setup
│   │   ├── metrics.py           # Prometheus metrics
│   │   └── tracing.py           # Distributed tracing
│   │
│   └── storage/                 # State management
│       ├── __init__.py
│       ├── redis_client.py      # Redis wrapper
│       └── memory.py            # In-memory fallback
│
├── tests/                       # Test suite
│   ├── unit/
│   ├── integration/
│   └── e2e/
│
├── scripts/                     # Utility scripts
│   ├── setup_redis.sh
│   └── health_check.sh
│
├── .env.example                 # Environment template
├── .gitignore
├── pyproject.toml               # Dependencies
├── README.md
└── docker-compose.yml           # Local development stack
```

🔧 Configuration

All configuration is managed through environment variables (see .env.example).

Switching LLM Providers

Simply change the LLM_PROVIDER environment variable:

```bash
# Use OpenAI
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...

# Use Claude
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Use Gemini
LLM_PROVIDER=google
GOOGLE_API_KEY=...
```

No code changes required! The system automatically routes to the correct provider.
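
Under the hood this is a small factory keyed off LLM_PROVIDER. The sketch below shows the pattern with stub provider classes; the real implementation lives in src/llm/factory.py, and its class names and signatures may differ.

```python
# Illustrative sketch of the factory behind LLM_PROVIDER routing.
import os
from abc import ABC, abstractmethod

class BaseLLMProvider(ABC):
    @abstractmethod
    async def complete(self, prompt: str) -> str: ...

class OpenAIProvider(BaseLLMProvider):
    async def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the OpenAI API here")

class AnthropicProvider(BaseLLMProvider):
    async def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the Anthropic API here")

class GoogleProvider(BaseLLMProvider):
    async def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the Gemini API here")

_PROVIDERS: dict[str, type[BaseLLMProvider]] = {
    "openai": OpenAIProvider,
    "anthropic": AnthropicProvider,
    "google": GoogleProvider,
}

def create_provider() -> BaseLLMProvider:
    """Pick the provider class named by LLM_PROVIDER; no call sites change."""
    name = os.environ.get("LLM_PROVIDER", "openai")
    if name not in _PROVIDERS:
        raise ValueError(f"Unknown LLM_PROVIDER: {name!r}")
    return _PROVIDERS[name]()
```

Because every provider implements the same base interface, the orchestrator only ever talks to BaseLLMProvider, which is what makes the switch a one-line config change.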

πŸ› οΈ Development

Running Tests

```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test file
pytest tests/unit/test_llm_providers.py
```

Code Quality

```bash
# Format code
ruff format .

# Lint
ruff check .

# Type checking
mypy src/
```

📊 Monitoring

Metrics

Prometheus metrics are available at http://localhost:9090/metrics (a declaration sketch follows the list):

  • mcp_requests_total - Total requests by tool and status

  • mcp_request_duration_seconds - Request latency histogram

  • mcp_active_connections - Current active connections

  • llm_api_calls_total - LLM API calls by provider

  • llm_tokens_used_total - Token usage tracking
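
A sketch of how these metrics might be declared with prometheus_client; the metric names come from the list above, but the label sets are assumptions (the actual definitions live in src/observability/metrics.py).

```python
# Declaring the listed metrics with prometheus_client (labels assumed).
from prometheus_client import Counter, Gauge, Histogram

MCP_REQUESTS = Counter("mcp_requests_total", "Total requests", ["tool", "status"])
MCP_LATENCY = Histogram("mcp_request_duration_seconds", "Request latency", ["tool"])
MCP_CONNECTIONS = Gauge("mcp_active_connections", "Current active connections")
LLM_CALLS = Counter("llm_api_calls_total", "LLM API calls", ["provider"])
LLM_TOKENS = Counter("llm_tokens_used_total", "Tokens used", ["provider", "kind"])

# Illustrative usage inside a tool handler:
# with MCP_LATENCY.labels(tool="get_user_subscriptions").time():
#     result = await run_tool(...)
# MCP_REQUESTS.labels(tool="get_user_subscriptions", status="ok").inc()
```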

Logs

Structured JSON logs with trace IDs for correlation:

{ "timestamp": "2024-01-15T10:30:00Z", "level": "info", "event": "tool_executed", "tool_name": "get_user_subscriptions", "user_id": 123, "duration_ms": 245, "trace_id": "abc-123-def" }

🔒 Security

  • RBAC: Role-based access control for all tools (sketched after this list)

  • Rate Limiting: Per-user and global rate limits

  • Input Validation: Pydantic schemas for all inputs

  • Secret Management: Never log or expose API keys

  • CORS: Configurable allowed origins

  • HTTPS: Enforce HTTPS in production
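
As referenced above, the RBAC check conceptually runs before every tool call. A sketch under assumed names; the real helpers live in src/core/security.py and src/orchestrator/policy.py, and the policy table below is hypothetical.

```python
# Illustrative RBAC gate for tool execution (names are assumptions).
class PermissionDenied(Exception):
    """Raised when the caller's roles do not allow a tool."""

# Hypothetical role-to-tool policy table.
POLICY: dict[str, set[str]] = {
    "get_user_subscriptions": {"user", "admin"},
    "refund_payment": {"admin"},
}

def check_access(tool_name: str, roles: set[str]) -> None:
    """Deny the call before the tool body runs if no role matches."""
    allowed = POLICY.get(tool_name, set())
    if not roles & allowed:
        raise PermissionDenied(
            f"{tool_name} requires one of {sorted(allowed)}, got {sorted(roles)}"
        )

# check_access("refund_payment", {"user"})  # -> PermissionDenied
```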

🚒 Deployment

Docker

```bash
docker build -t updation-mcp:latest .
docker run -p 8050:8050 -p 8002:8002 --env-file .env updation-mcp:latest
```

Docker Compose

```bash
docker-compose up -d
```

πŸ“ Adding New Tools

  1. Create tool module in src/mcp_server/tools/your_domain/

  2. Implement tool.py with a register(mcp) function

  3. Add schemas in schemas.py

  4. Add business logic in service.py

  5. Auto-discovery handles the rest!

Example:

```python
# src/mcp_server/tools/your_domain/tool.py
from mcp.server.fastmcp import FastMCP

def register(mcp: FastMCP) -> None:
    @mcp.tool()
    async def your_tool(param: str):
        """Tool description for LLM."""
        return {"result": "data"}
```
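
Steps 3 and 4 might look like this; the field names, validation rules, and return shape below are illustrative, not the project's actual schemas.

```python
# src/mcp_server/tools/your_domain/schemas.py (illustrative contents)
from pydantic import BaseModel, Field

class YourToolInput(BaseModel):
    """Validated input for your_tool."""
    param: str = Field(min_length=1, description="What the LLM should pass")

# src/mcp_server/tools/your_domain/service.py (illustrative contents)
async def run_your_tool(payload: YourToolInput) -> dict:
    """Business logic, kept separate from the MCP registration glue."""
    return {"result": f"processed {payload.param}"}
```

Keeping validation in schemas.py and logic in service.py leaves tool.py as thin registration glue, which is what lets auto-discovery pick the module up unchanged.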

🤝 Contributing

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes with tests

  4. Run quality checks: ruff check . && pytest

  5. Submit a pull request

📄 License

[Your License Here]

🆘 Support

For issues or questions:

  • GitHub Issues: [Your Repo]

  • Email: [Your Email]

  • Docs: [Your Docs URL]

