Model Context Protocol Server
A FastAPI-based server that implements the Model Context Protocol to provide relevant information to AI models when processing user queries.
Environment Configuration
The server can be configured using environment variables. Create a .env file in the root directory with the following variables:
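The variable list itself is not reproduced in this README, so the sketch below is an assumption: only DEBUG is confirmed elsewhere in this document, and the other keys (ENVIRONMENT, OLLAMA_BASE_URL, DEFAULT_MODEL) are illustrative names.

```bash
# Only DEBUG is confirmed elsewhere in this README; the other keys are illustrative.
ENVIRONMENT=development
DEBUG=true
OLLAMA_BASE_URL=http://localhost:11434
DEFAULT_MODEL=mistral
```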
Deployment
Local Development
Create and activate a virtual environment:
Install dependencies:
Run the server:
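The three steps above as shell commands — a sketch that assumes a requirements.txt and a main:app FastAPI entrypoint, neither of which is confirmed by this README:

```bash
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate

# Install dependencies (assumes a requirements.txt)
pip install -r requirements.txt

# Run the server with auto-reload (main:app is an assumed entrypoint)
uvicorn main:app --reload
```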
Production Deployment
Set up environment variables for production:
Run the server:
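A sketch of the production steps; apart from DEBUG, the variable names and the uvicorn invocation are assumptions:

```bash
# Disable API documentation and debug behavior
export DEBUG=false
export ENVIRONMENT=production

# Serve without auto-reload; the worker count is illustrative
uvicorn main:app --host 0.0.0.0 --port 8000 --workers 4
```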
Docker Deployment
Build the Docker image:
Run the container:
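Typical commands for the two steps above, assuming a Dockerfile at the repository root; the image name is illustrative:

```bash
# Build the image
docker build -t mcp-server .

# Run the container, passing configuration via the .env file
docker run -d -p 8000:8000 --env-file .env mcp-server
```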
API Documentation
When running in development mode (DEBUG=true), API documentation is available at:
Swagger UI: http://your-server:8000/api/v1/docs
ReDoc: http://your-server:8000/api/v1/redoc
OpenAPI JSON: http://your-server:8000/api/v1/openapi.json
Security Considerations
In production:
Set DEBUG=false to disable API documentation
Use HTTPS
Configure proper authentication
Use secure database credentials
Set appropriate CORS policies (see the sketch below)
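How this project wires CORS is not shown here; the following is a minimal sketch using FastAPI's standard CORSMiddleware, with an illustrative origin:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Restrict cross-origin requests to known frontends in production.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-frontend.example.com"],  # illustrative origin
    allow_credentials=True,
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)
```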
For the Ollama server:
Ensure the Ollama instance itself is properly secured
Use internal network for communication if possible
Consider using API keys or other authentication methods
Monitoring and Logging
The server includes built-in logging with different levels based on the environment:
Development: Debug level logging
Production: Info level logging
Logs can be configured to output to files or external logging services.
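A minimal sketch of environment-based log levels using the standard library; the function name and the reliance on the DEBUG variable are assumptions, not this project's actual setup:

```python
import logging
import os

def configure_logging() -> None:
    # Debug-level output in development, info-level in production.
    debug = os.getenv("DEBUG", "false").lower() == "true"
    logging.basicConfig(
        level=logging.DEBUG if debug else logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
```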
Features
Intelligent query routing based on query analysis
Support for multiple data sources (Database, GraphQL, REST)
Integration with Ollama models (Mistral, Qwen, Llama2)
Environment-aware configuration (Development/Production)
Comprehensive logging and error handling
Health check endpoints
Mock data support for development
Prerequisites
Python 3.8+
Ollama installed and running locally
Required Ollama models:
mistral
qwen
llama2
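The required models listed above can be installed with the standard Ollama CLI:

```bash
ollama pull mistral
ollama pull qwen
ollama pull llama2
```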
Installation
Clone the repository:
Create and activate a virtual environment:
Install dependencies:
Create a .env file:
Update the .env file with your configuration:
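The five installation steps above as shell commands — the repository URL is a placeholder, and the .env.example template is an assumption:

```bash
# Clone the repository (URL is a placeholder)
git clone https://github.com/your-org/mcp-server.git
cd mcp-server

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Create a .env file (assumes a .env.example template; otherwise create it by hand)
cp .env.example .env
```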
Running the Server
Start Ollama (if not already running):
Start the MCP server:
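As commands: ollama serve is the standard way to start the Ollama daemon, while main:app is an assumed entrypoint for this project:

```bash
# Start Ollama (listens on port 11434 by default)
ollama serve

# In another terminal, start the MCP server
uvicorn main:app --reload
```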
The server will be available at http://localhost:8000
API Endpoints
Get Context
List Available Models
Health Check
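This README does not spell out the routes for the three endpoints above. Assuming the /api/v1 prefix seen in the documentation URLs, illustrative calls might look like:

```bash
# All paths below are assumptions based on the /api/v1 prefix shown earlier.
curl -X POST http://localhost:8000/api/v1/context \
  -H "Content-Type: application/json" \
  -d '{"query": "summarize recent orders", "model": "mistral"}'

curl http://localhost:8000/api/v1/models

curl http://localhost:8000/api/v1/health
```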
Project Structure
Development
Adding New Providers
Create a new provider class in the appropriate directory
Implement the required interface methods
Register the provider in the factory
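A hypothetical sketch of the three steps above — BaseProvider, get_context, and the factory registration are assumed names, not this project's actual interface:

```python
from abc import ABC, abstractmethod

class BaseProvider(ABC):
    """Assumed base interface; the project's actual class may differ."""

    @abstractmethod
    async def get_context(self, query: str) -> dict:
        """Return contextual data relevant to the query."""

class RestApiProvider(BaseProvider):
    """Example provider implementing the assumed interface."""

    async def get_context(self, query: str) -> dict:
        # Fetch and shape data from a REST backend here.
        return {"source": "rest", "query": query, "results": []}

# Hypothetical factory registration
PROVIDERS = {"rest": RestApiProvider}
```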
Adding New Models
Add the model to the AVAILABLE_MODELS dictionary in model_providers/ollama.py
Update the model validation logic if needed
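The shape of AVAILABLE_MODELS is not shown in this README; a plausible minimal form, purely as an assumption:

```python
# model_providers/ollama.py — the structure below is an assumption
AVAILABLE_MODELS = {
    "mistral": "mistral",
    "qwen": "qwen",
    "llama2": "llama2",
    # Register new models here, keyed by the name clients will request
}
```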
Contributing
Fork the repository
Create a feature branch
Commit your changes
Push to the branch
Create a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.