# MCP Server (Model-Compute Paradigm)
A modular, production-ready FastAPI server built to route and orchestrate multiple AI/LLM-powered models behind a unified, scalable interface. It supports **streaming chat**, **LLM-based routing**, and **multi-model pipelines** (such as analyze → summarize → recommend), all asynchronously and fully Dockerized.
---
## Project Score (Production Readiness)
| Capability | Status | Details |
|-------------------------------|----------------|---------------------------------------------------------------------------|
| Multi-Model Orchestration | ✅ Complete | Dynamic routing between `chat`, `summarize`, `sentiment`, `recommend` |
| LLM-Based Task Router | ✅ Complete | GPT-powered routing via the `"auto"` task type |
| Async FastAPI + Concurrency | ✅ Complete | Async/await + concurrent task execution with simulated/model API delays |
| GPT Streaming Support | ✅ Complete | `text/event-stream` chunked responses for chat endpoints |
| Unit + Mocked API Tests | ✅ Complete | Pytest-based test suite with mocked `run()` responses |
| Dockerized + Clean Layout | ✅ Complete | Python 3.13 base image, no Conda dependency, production-ready Dockerfile |
| Metadata-Driven Registry | ✅ Complete | Model metadata loaded from external YAML config |
| Rate Limiting & Retry | ⏳ In Progress | Handles 429 retry loop; rate-limiting controls WIP |
| CI + Docs | ⏳ Next | GitHub Actions + Swagger/Redoc planned |
---
## Why This Project? (Motivation)
Modern ML/LLM deployments often involve:
- Multiple task types and model backends (OpenAI, HF, local, REST)
- Routing decisions based on input intent
- Combining outputs of multiple models (e.g., `summarize` + `recommend`)
- Handling 429 retries, async concurrency, and streaming responses

However, building an **LLM backend API server** that is:
- Async + concurrent
- Streamable
- Pluggable (via metadata)
- Testable
- Dockerized

... is **non-trivial**, and the pieces are rarely found in one place.
---
## What We've Built (Solution)
This repo is a **production-ready PoC** of an MCP (Model-Compute Paradigm) architecture:
- ✅ **FastAPI-based microserver** that handles multiple tasks via a single `/task` endpoint (see the sketch after this list)
- ✅ Task router that can:
  - Dispatch to specific model types (`chat`, `sentiment`, `summarize`, `recommend`)
  - Use an LLM to infer which task to run (`auto`)
  - Run multiple models in sequence (`analyze`)
- ✅ GPT streaming via `text/event-stream`
- ✅ Async/await architecture for concurrency
- ✅ Clean, modular code for easy extension
- ✅ Dockerized for deployment
- ✅ Tested with Pytest and mocking
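
To make the shape of that `/task` endpoint concrete, here is a minimal, hypothetical sketch; the request model, handler names, and stub logic are illustrative assumptions, not the repo's actual code:

```python
# Hypothetical sketch of a /task endpoint; names and handlers are illustrative, not the repo's code.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class TaskRequest(BaseModel):
    type: str    # e.g. "chat", "sentiment", "summarize", "recommend"
    input: str

# Stub handlers standing in for real model backends.
async def chat(text: str) -> str:
    return f"(chat reply to: {text})"

async def sentiment(text: str) -> str:
    return "positive"

HANDLERS = {"chat": chat, "sentiment": sentiment}

@app.post("/task")
async def handle_task(req: TaskRequest):
    handler = HANDLERS.get(req.type)
    if handler is None:
        raise HTTPException(status_code=400, detail=f"unknown task type: {req.type}")
    return {"task": req.type, "result": await handler(req.input)}
```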
---
## Use Cases
| Use Case | MCP Server Support |
|-----------------------------------------|------------------------------------------------|
| Build your own ChatGPT-style API | ✅ `chat` task with streaming |
| Build an intelligent task router | ✅ `auto` task with GPT-powered intent parsing |
| Build AI pipelines (like RAG/RL) | ✅ `analyze` task with sequential execution |
| Swap between OpenAI/Hugging Face APIs | ✅ Via `model_registry.yaml` config (example below) |
| Add custom models (e.g., OCR, vision) | ✅ Just add a new module + registry entry |
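
For reference, a metadata-driven registry of this kind could look roughly like the YAML below; the field names and entries are assumptions for illustration, and the real schema is whatever the repo's YAML file defines:

```yaml
# Hypothetical registry entries; field names are illustrative, not the repo's actual schema.
models:
  chat:
    provider: openai
    model_name: gpt-4
    streaming: true
  sentiment:
    provider: huggingface
    model_name: distilbert-base-uncased-finetuned-sst-2-english
  recommend:
    provider: local
    module: models.recommender
```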
---
## Features
- **Async FastAPI** server
- **Task-based Model Routing** (`chat`, `sentiment`, `recommender`, `summarize`)
- **Model Registry** loaded from YAML/JSON
- **Automatic Retry** and **Rate Limit Handling** for upstream APIs (see the backoff sketch after this list)
- **Streaming Responses** for chat
- **Unit Tests + Mocked API Calls**
- **Dockerized** for production deployment
- Modular structure, ready for CI/CD
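
As a rough picture of the retry behaviour, one common pattern is an exponential-backoff loop around the upstream call; the sketch below uses a placeholder `call_model` function and is not the repo's actual retry code:

```python
# Sketch of retry-on-429 with exponential backoff; call_model is a placeholder, not a real client.
import asyncio


class RateLimitError(Exception):
    """Stands in for an HTTP 429 response from the upstream API."""


async def call_model(prompt: str) -> str:
    raise RateLimitError  # placeholder: a real client would call OpenAI/HF here


async def call_with_retry(prompt: str, max_retries: int = 5) -> str:
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return await call_model(prompt)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise                      # give up after the last attempt
            await asyncio.sleep(delay)     # back off before retrying
            delay *= 2                     # exponential backoff: 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")
```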
---
## Architecture Overview
```plaintext
┌──────────────┐
│   Frontend   │
└──────┬───────┘
       │
       ▼
┌──────────────┐        YAML/JSON
│   FastAPI    │◄────── Model Registry
│    Server    │
└──────┬───────┘
       │
   ┌───┴──────────┬────────────────┐
   ▼              ▼                ▼
 [chat]      [sentiment]     [recommender]
 GPT-4       HF pipeline     stub logic / API
```
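
In code, the router's `auto` and `analyze` behaviour can be pictured as a small async dispatcher. The sketch below uses made-up handler names, a stubbed `llm_classify` helper, and an assumed summarize-then-recommend pipeline, so treat it as an illustration of the flow rather than the repo's implementation:

```python
# Hypothetical dispatcher showing "auto" (LLM-inferred task) and "analyze" (sequential pipeline).
import asyncio

async def summarize(text: str) -> str:
    return f"summary of: {text[:40]}"

async def recommend(text: str) -> str:
    return f"recommendations based on: {text[:40]}"

async def llm_classify(text: str) -> str:
    # A real implementation would ask an LLM to infer the task; here it is hardcoded.
    return "summarize"

HANDLERS = {"summarize": summarize, "recommend": recommend}

async def route(task_type: str, text: str) -> str:
    if task_type == "auto":
        task_type = await llm_classify(text)    # LLM-based routing
    if task_type == "analyze":
        summary = await summarize(text)         # step 1: summarize the input
        return await recommend(summary)         # step 2: feed the summary to the recommender
    return await HANDLERS[task_type](text)

if __name__ == "__main__":
    print(asyncio.run(route("analyze", "Long product review text ...")))
```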
---
## Setup

### Install dependencies

```bash
git clone https://github.com/YOUR_USERNAME/mcp-server.git
cd mcp-server

# Optional: create a virtual environment
python -m venv .venv
source .venv/bin/activate    # or .venv\Scripts\activate on Windows

# ...or with conda
conda create -n <env_name>
conda activate <env_name>

# Install dependencies
pip install -r requirements.txt
```

### Run the server

```bash
uvicorn app:app --reload
```

Access the docs at: http://localhost:8000/docs
## Running Tests

```bash
pytest tests/
```

Unit tests mock external API calls using `unittest.mock.AsyncMock`.
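
A test in that style might look roughly like the following; the patched import path (`agent.task_router.run`) and the request payload are assumptions, so adapt them to the actual module layout:

```python
# Hypothetical test sketch; the patched path and payload are illustrative assumptions.
from unittest.mock import AsyncMock, patch

from fastapi.testclient import TestClient

from app import app  # assumes the FastAPI instance is exported as `app` in app.py

client = TestClient(app)

def test_chat_task_with_mocked_model():
    # Replace the model call with an AsyncMock so no external API is hit.
    with patch("agent.task_router.run", new=AsyncMock(return_value="mocked reply")):
        resp = client.post("/task", json={"type": "chat", "input": "hello"})
    assert resp.status_code == 200
```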
## Docker Support

### Build the image

```bash
docker build -t mcp-server .
```

### Run the container

```bash
docker run -p 8000:8000 mcp-server
```
## Example API Request

```bash
curl -X POST http://localhost:8000/task \
  -H "Content-Type: application/json" \
  -d '{
    "type": "chat",
    "input": "What are the benefits of restorative yoga?"
  }'
```
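
For the streaming chat path, the response can be consumed as it arrives by disabling curl's buffering with `-N`; this assumes the chat task emits `text/event-stream` chunks on the same endpoint (adjust if the repo exposes a dedicated streaming route):

```bash
# Assumes the chat task streams on /task; -N turns off curl's output buffering.
curl -N -X POST http://localhost:8000/task \
  -H "Content-Type: application/json" \
  -d '{"type": "chat", "input": "Explain restorative yoga in one paragraph."}'
```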
## Directory Structure

```plaintext
mcp/
├── app.py                  # FastAPI entry point
├── models/                 # ML models (chat, sentiment, etc.)
├── agent/
│   ├── task_router.py      # Task router
│   └── model_registry.py   # Registry loader
├── registry/models.yaml    # YAML registry of model metadata
├── tests/                  # Unit tests
├── Dockerfile
├── requirements.txt
├── README.md
└── .env / .gitignore
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you'd like to change.

## License
MIT

## Author
Built by Sriram Kumar Reddy Challa