Model Context Protocol Server
A FastAPI-based server that implements the Model Context Protocol to provide relevant information to AI models when processing user queries.
Environment Configuration
The server can be configured using environment variables. Create a .env file in the root directory with the following variables:
```
# Server Configuration
HOST=0.0.0.0            # Server host (0.0.0.0 for all interfaces)
PORT=8000               # Server port
ENVIRONMENT=development # Environment (development/production)
DEBUG=true              # Enable debug mode
API_PREFIX=/api/v1      # API prefix for all endpoints

# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434  # Ollama server URL
OLLAMA_MODEL=llama2     # Default model to use

# Database Configuration
DATABASE_URL=sqlite:///./catalog.db     # Database connection URL
```
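To illustrate how these variables are consumed, here is a minimal stdlib-only sketch of a `.env` loader. It is an assumption for illustration: the actual server may use a library such as `python-dotenv` or pydantic settings instead.

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=value lines; '#' starts a comment, blanks are skipped."""
    values = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()
                if "=" in line:
                    key, _, val = line.partition("=")
                    values[key.strip()] = val.strip()
    except FileNotFoundError:
        pass  # no .env file: fall back to the process environment only
    # Real environment variables take precedence over .env entries
    for key in values:
        if key in os.environ:
            values[key] = os.environ[key]
    return values
```

With the file shown above, `load_env()["PORT"]` would return the string `"8000"`; values are strings, so booleans like `DEBUG` need explicit conversion.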
Deployment
Local Development
Create and activate a virtual environment:

```
python -m venv .venv-py311
source .venv-py311/bin/activate  # On Unix/macOS
# or
.venv-py311\Scripts\activate     # On Windows
```

Install dependencies:

```
pip install -r requirements.txt
```

Run the server:

```
uvicorn main:app --reload
```

Production Deployment
Set up environment variables for production:

```
HOST=0.0.0.0
PORT=8000
ENVIRONMENT=production
DEBUG=false
API_PREFIX=/api/v1
OLLAMA_BASE_URL=http://your-ollama-server:11434
OLLAMA_MODEL=llama2
```

Run the server:

```
uvicorn main:app --host 0.0.0.0 --port 8000
```

Docker Deployment
Build the Docker image:

```
docker build -t mcp-server .
```

Run the container:

```
docker run -p 8000:8000 \
  -e HOST=0.0.0.0 \
  -e PORT=8000 \
  -e ENVIRONMENT=production \
  -e DEBUG=false \
  -e OLLAMA_BASE_URL=http://your-ollama-server:11434 \
  -e OLLAMA_MODEL=llama2 \
  mcp-server
```

API Documentation
When running in development mode (DEBUG=true), API documentation is available at:
Swagger UI: http://your-server:8000/api/v1/docs
ReDoc: http://your-server:8000/api/v1/redoc
OpenAPI JSON: http://your-server:8000/api/v1/openapi.json
Security Considerations
In production:
Set DEBUG=false to disable API documentation
Use HTTPS
Configure proper authentication
Use secure database credentials
Set appropriate CORS policies
For Ollama server:
Ensure Ollama server is properly secured
Use internal network for communication if possible
Consider using API keys or other authentication methods
Monitoring and Logging
The server includes built-in logging with different levels based on the environment:
Development: Debug level logging
Production: Info level logging
Logs can be configured to output to files or external logging services.
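The environment-based level selection described above can be sketched with the standard `logging` module. This is an illustration of the behavior, not the server's actual `logger_config.py`:

```python
import logging
import os

def configure_logging() -> logging.Logger:
    """Select the log level from ENVIRONMENT: DEBUG in development, INFO in production."""
    env = os.getenv("ENVIRONMENT", "development")
    level = logging.DEBUG if env == "development" else logging.INFO
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    logger = logging.getLogger("mcp-server")
    logger.setLevel(level)
    return logger
```

Routing output to files or an external service would then be a matter of attaching additional handlers to the same logger.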
Features
Intelligent query routing based on query analysis
Support for multiple data sources (Database, GraphQL, REST)
Integration with Ollama models (Mistral, Qwen, Llama2)
Environment-aware configuration (Development/Production)
Comprehensive logging and error handling
Health check endpoints
Mock data support for development
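To give a feel for the "intelligent query routing" feature, here is a hypothetical keyword-based router. The real analysis lives in `query_analyzer.py` and is likely more sophisticated; the keywords and source names below are purely illustrative:

```python
def route_query(query: str) -> str:
    """Hypothetical routing: pick a data source from keywords in the query."""
    q = query.lower()
    if any(k in q for k in ("graphql", "mutation", "schema")):
        return "graphql"
    if any(k in q for k in ("price", "product", "catalog", "stock")):
        return "database"
    return "rest"  # default data source
```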
Prerequisites
Python 3.8+
Ollama installed and running locally
Required Ollama models:
mistral
qwen
llama2
Installation
Clone the repository:
git clone <repository-url>
cd mcp-serverCreate and activate a virtual environment:
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activateInstall dependencies:
pip install -r requirements.txtCreate a
.envfile:
cp .env.example .envUpdate the
.envfile with your configuration:
ENVIRONMENT=development
OLLAMA_MODEL=mistral
OLLAMA_BASE_URL=http://localhost:11434Running the Server
Start Ollama (if not already running):
ollama serveStart the MCP server:
python main.pyThe server will be available at http://localhost:8000
API Endpoints
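Beyond curl, the endpoints below can be called from Python. A stdlib-only sketch for the context endpoint, assuming the default host and port and the request shape shown in the curl example:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # adjust to your deployment

def build_context_request(query: str, model: str = "mistral") -> request.Request:
    """Build the POST /context request; pass it to urllib.request.urlopen() to send."""
    payload = json.dumps({"query": query, "model": model}).encode()
    return request.Request(
        BASE_URL + "/context",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

With the server running, `request.urlopen(build_context_request("Tell me about iPhone 15"))` returns the JSON context response.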
Get Context

```
curl -X POST http://localhost:8000/context \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Tell me about iPhone 15",
    "model": "mistral"
  }'
```

List Available Models

```
curl http://localhost:8000/models
```

Health Check

```
curl http://localhost:8000/health
```

Project Structure
```
mcp-server/
├── context_providers/       # Data source providers
│   ├── database.py          # Database provider
│   ├── graphql.py           # GraphQL provider
│   ├── rest.py              # REST API provider
│   └── provider_factory.py
├── model_providers/         # AI model providers
│   ├── base.py              # Base model provider
│   ├── ollama.py            # Ollama integration
│   └── provider_factory.py
├── main.py                  # FastAPI application
├── query_analyzer.py        # Query analysis logic
├── logger_config.py         # Logging configuration
├── requirements.txt         # Project dependencies
└── README.md                # Project documentation
```

Development
Adding New Providers
Create a new provider class in the appropriate directory
Implement the required interface methods
Register the provider in the factory
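The three steps above might look like the following sketch. The class, method name, and registry are hypothetical; consult the base classes and `provider_factory.py` for the actual interface:

```python
PROVIDERS = {}  # hypothetical registry; the real one lives in provider_factory.py

def register(provider_cls):
    """Register a provider class under its declared name."""
    PROVIDERS[provider_cls.name] = provider_cls
    return provider_cls

@register
class WeatherProvider:
    """Hypothetical new provider; names are illustrative, not the exact interface."""
    name = "weather"

    def get_context(self, query: str) -> dict:
        # Fetch or derive contextual data for the query here
        return {"source": self.name, "data": f"context for {query!r}"}
```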
Adding New Models
Add the model to the AVAILABLE_MODELS dictionary in model_providers/ollama.py
Update the model validation logic if needed
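A sketch of what that change might involve. The dictionary shape and the validation helper are assumptions; check `model_providers/ollama.py` for the real structure:

```python
# Hypothetical shape of AVAILABLE_MODELS in model_providers/ollama.py
AVAILABLE_MODELS = {
    "mistral": {"name": "mistral"},
    "qwen": {"name": "qwen"},
    "llama2": {"name": "llama2"},
    # new entries follow the same pattern, e.g. "mixtral": {"name": "mixtral"},
}

def validate_model(model: str) -> str:
    """Reject models that are not registered."""
    if model not in AVAILABLE_MODELS:
        raise ValueError(f"Unknown model: {model}")
    return model
```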
Contributing
Fork the repository
Create a feature branch
Commit your changes
Push to the branch
Create a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.