🧠 MCP: Multi-Agent Control Point
This project implements a multi-agent server that routes user queries to specialized agents (date, location, weather) or to an LLM-based technical expert. It includes a simple web interface built with Streamlit for ease of use.
🚀 Features
- 🌐 Backend with FastAPI
- 🧠 Specialized agents (date, location, weather) plus an LLM expert (deepseek-r1:7b via Ollama)
- 🧩 Extensible, modular agent system based on inheritance
- ⚙️ Common `AgenteBase` base class for uniform error and response handling
- 🤖 Smart logic that lets agents collaborate with each other
- 🖥️ Visual interface with Streamlit (GUI)
- 🐳 Docker containers for easy deployment
- 🔌 Client-server communication ready for local or remote networks
📁 Project structure
⚙️ Requirements
🧪 Quick installation
1. Clone the repository
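For example (the repository URL and directory name are placeholders, substitute the real ones):

```shell
git clone <repository-url>
cd <repository-directory>
```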
2. Create the Streamlit configuration file

Inside the `gui` directory, create the file `.streamlit/secrets.toml` (Streamlit reads secrets from this path by default) with the backend connection settings.
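A minimal sketch of the file, assuming the GUI reads the backend URL from a `backend_url` key (the key name is hypothetical; use whatever the `gui` code actually expects):

```toml
# gui/.streamlit/secrets.toml — "backend_url" is an assumed key name
backend_url = "http://localhost:8000"
```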
3. Run with Docker Compose
This will build and start two containers:

- Backend at `http://localhost:8000`
- Graphical interface at `http://localhost:8501`
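Step 3 is typically a single command, assuming a standard `docker-compose.yml` at the repository root (use `docker-compose up --build` on older Docker installs):

```shell
# Build the images and start both containers in the foreground
docker compose up --build
```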
🌍 Access from another machine (optional)
- Make sure the ports (`8000`, `8501`) are exposed correctly.
- Use the server machine's IP instead of `localhost` in `secrets.toml`.
- You can also set up custom Docker networks for cross-host access.
📦 For production
You can run only the backend if you want to integrate it with another interface.
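For example, assuming the compose service is named `backend` (check the service name in `docker-compose.yml`):

```shell
# Start only the backend service, detached
docker compose up -d backend
```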
✨ Example of use
In the web interface, you can type questions like:
- `¿Qué día es hoy?` (What day is it today?)
- `¿Dónde estoy?` (Where am I?)
- `¿Qué clima hace?` (What's the weather like?)
- `Explícame qué es Python` (Explain to me what Python is)
The app will decide whether to answer the question directly or delegate it to an agent.
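The routing idea can be sketched as follows; the pattern lists and names here are illustrative, not the project's actual code:

```python
# Hypothetical sketch of query routing: match the question against each
# agent's patterns and fall back to the LLM expert when nothing matches.
import re

AGENT_PATTERNS = {
    "DATE": [r"qué día", r"fecha"],
    "LOCATION": [r"dónde estoy"],
    "CLIMATE": [r"clima", r"tiempo hace"],
}

def route(question: str) -> str:
    q = question.lower()
    for agent, patterns in AGENT_PATTERNS.items():
        if any(re.search(p, q) for p in patterns):
            return agent
    return "LLM_EXPERT"  # no specialized agent matched; delegate to the LLM

print(route("¿Qué clima hace?"))        # matches the CLIMATE patterns
print(route("Explícame qué es Python"))  # falls through to the LLM expert
```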
🛠️ Agents available
| Agent | Function |
|---|---|
| DATE | Returns the current date and time |
| LOCATION | Detects the city and country by IP |
| CLIMATE | Returns the weather at the current location |
🔄 Interaction between agents
The weather agent now uses the location agent directly to determine the geographic coordinates (`lat`, `lon`) and the city before querying the weather, allowing responses tailored to the user's actual location. This improves modularity and collaboration between agents.
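This collaboration can be sketched as follows; the class names and return shapes are assumptions for illustration, not the project's actual API (only the `success`/`data`/`error` dict convention comes from the README):

```python
# Hypothetical sketch: the weather agent asks the location agent for
# coordinates before answering. Class names are illustrative.

class LocationAgent:
    def agente(self):
        # The real agent detects the location by IP; fixed data here.
        return {"success": True,
                "data": {"lat": 40.42, "lon": -3.70, "city": "Madrid"}}

class WeatherAgent:
    def __init__(self, location_agent):
        self.location_agent = location_agent

    def agente(self):
        loc = self.location_agent.agente()
        if not loc["success"]:
            return {"success": False, "error": "Could not determine location"}
        lat, lon = loc["data"]["lat"], loc["data"]["lon"]
        # A real implementation would query a weather API with lat/lon here.
        return {"success": True,
                "data": {"city": loc["data"]["city"],
                         "weather": f"sunny at ({lat}, {lon})"}}
```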
🧩 How to create a new agent
- Create a class that inherits from `AgenteBase`.
- Specify the patterns used to detect the questions relevant to your agent.
- Implement `agente()`, which returns a dict with the key `success` and either `data` or `error`.
- The agent will automatically use the indicated LLM to generate natural responses based on your data.
⚠️ Important technical notes
- All agents inherit from `AgenteBase`, which manages:
  - standard error handling
  - converting data into a natural-language response via the LLM
- The `agente()` method must return a structured dictionary.
- Each agent specifies which LLM model to use (`llm_simple` or `llm_experto`).
📄 License
This project is licensed under the MIT License.
🙋‍♂️ Author
Developed by Alejandro Gómez Sierra.