# FastMCP - Model Context Protocol Server
FastMCP is a Model Context Protocol (MCP) server that provides LLM services through the MCP standard. It acts as a bridge between MCP clients and your local LLM service, enabling seamless integration with MCP-compatible applications.
## Features
- **MCP Protocol Compliance**: Full implementation of Model Context Protocol
- **Tools**: Chat completion, model listing, health checks
- **Prompts**: Pre-built prompts for common tasks (assistant, code review, summarization)
- **Resources**: Server configuration and LLM service status
- **Streaming Support**: Both streaming and non-streaming responses
- **Configurable**: Environment-based configuration
- **Robust**: Built-in error handling and health monitoring
- **Integration Ready**: Works with any OpenAI-compatible LLM service
## Getting Started
### Prerequisites
- Python 3.9+
- pip
- Local LLM service running on port 5001 (OpenAI-compatible API)
- MCP client (e.g., Claude Desktop, MCP Inspector)
### Installation
1. Clone the repository:
```bash
git clone https://github.com/yourusername/fastmcp.git
cd fastmcp
```
2. Create a virtual environment and activate it:
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
3. Install dependencies:
```bash
pip install -r requirements.txt
```
4. Create a `.env` file (copy from `.env.mcp`) and configure:
```env
# Server Settings
MCP_SERVER_NAME=fastmcp-llm-router
MCP_SERVER_VERSION=0.1.0
# LLM Service Configuration
LOCAL_LLM_SERVICE_URL=http://localhost:5001
# Optional: API Key for LLM service
# LLM_SERVICE_API_KEY=your_api_key_here
# Timeouts (in seconds)
LLM_REQUEST_TIMEOUT=60
HEALTH_CHECK_TIMEOUT=10
# Logging
LOG_LEVEL=INFO
```
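For reference, a minimal sketch of how these settings might be loaded at startup, using `python-dotenv` and `os.getenv`. The variable names match `.env.mcp` above; the module name and defaults are illustrative, not the project's actual `config.py`:
```python
# config.py -- illustrative settings loader (assumes python-dotenv is installed)
import os
from dotenv import load_dotenv

load_dotenv()  # read .env from the project root

SERVER_NAME = os.getenv("MCP_SERVER_NAME", "fastmcp-llm-router")
SERVER_VERSION = os.getenv("MCP_SERVER_VERSION", "0.1.0")
LLM_SERVICE_URL = os.getenv("LOCAL_LLM_SERVICE_URL", "http://localhost:5001")
LLM_SERVICE_API_KEY = os.getenv("LLM_SERVICE_API_KEY")  # optional, may be None
LLM_REQUEST_TIMEOUT = float(os.getenv("LLM_REQUEST_TIMEOUT", "60"))
HEALTH_CHECK_TIMEOUT = float(os.getenv("HEALTH_CHECK_TIMEOUT", "10"))
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
```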
### Running the MCP Server
#### Option 1: Using the CLI script
```bash
python run_server.py
```
#### Option 2: Direct execution
```bash
python mcp_server.py
```
#### Option 3: With custom configuration
```bash
python run_server.py --llm-url http://localhost:5001 --log-level DEBUG
```
The MCP server runs over stdio, so MCP clients can connect to it directly through that channel.
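As a point of reference, a stripped-down `mcp_server.py` might look like the sketch below, using the `FastMCP` class from the official MCP Python SDK (`mcp` package). The tool body is a placeholder, not the actual implementation:
```python
# Minimal stdio MCP server sketch (assumes the `mcp` Python SDK is installed)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fastmcp-llm-router")

@mcp.tool()
def health_check() -> str:
    """Placeholder health check; the real tool queries the LLM service."""
    return "ok"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```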
## MCP Client Integration
### Claude Desktop Integration
Add the following to your Claude Desktop configuration file (`claude_desktop_config.json`):
```json
{
"mcpServers": {
"fastmcp-llm-router": {
"command": "python",
"args": ["/path/to/fastmcp/mcp_server.py"],
"env": {
"LOCAL_LLM_SERVICE_URL": "http://localhost:5001"
}
}
}
}
```
### MCP Inspector
Test your server with MCP Inspector:
```bash
npx @modelcontextprotocol/inspector python mcp_server.py
```
## Available Tools
### 1. Chat Completion
Send messages to your LLM service:
```json
{
"name": "chat_completion",
"arguments": {
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
"model": "default",
"temperature": 0.7
}
}
```
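Under the hood, a tool like this typically forwards the request to the OpenAI-compatible `/v1/chat/completions` endpoint. A sketch under that assumption; the function name, module-level URL, and timeout are illustrative rather than the project's actual code:
```python
# Illustrative forwarding logic for the chat_completion tool (uses httpx)
import httpx

LLM_SERVICE_URL = "http://localhost:5001"  # from LOCAL_LLM_SERVICE_URL

def chat_completion(messages: list[dict], model: str = "default",
                    temperature: float = 0.7) -> str:
    """Send messages to the OpenAI-compatible endpoint and return the reply text."""
    response = httpx.post(
        f"{LLM_SERVICE_URL}/v1/chat/completions",
        json={"model": model, "messages": messages, "temperature": temperature},
        timeout=60.0,  # LLM_REQUEST_TIMEOUT
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```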
### 2. List Models
Get available models from your LLM service:
```json
{
"name": "list_models",
"arguments": {}
}
```
### 3. Health Check
Check if your LLM service is running:
```json
{
"name": "health_check",
"arguments": {}
}
```
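`list_models` and `health_check` can both be backed by the same upstream call: an OpenAI-compatible service exposes `GET /v1/models`, so a successful response doubles as a liveness signal. A sketch under that assumption (helper names are illustrative):
```python
# Illustrative list_models / health_check helpers (uses httpx)
import httpx

LLM_SERVICE_URL = "http://localhost:5001"  # from LOCAL_LLM_SERVICE_URL

def list_models() -> list[str]:
    """Return model IDs from the OpenAI-compatible /v1/models endpoint."""
    response = httpx.get(f"{LLM_SERVICE_URL}/v1/models", timeout=10.0)
    response.raise_for_status()
    return [model["id"] for model in response.json()["data"]]

def health_check() -> bool:
    """Treat the service as healthy if /v1/models answers with 200 OK."""
    try:
        return httpx.get(f"{LLM_SERVICE_URL}/v1/models", timeout=10.0).status_code == 200
    except httpx.HTTPError:
        return False
```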
## Available Prompts
- **chat_assistant**: General AI assistant prompt
- **code_review**: Code review and analysis
- **summarize**: Text summarization
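With the MCP Python SDK, prompts like these are registered with a decorator. A sketch of what `summarize` might look like; the prompt wording is illustrative:
```python
# Illustrative prompt registration (assumes the `mcp` Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fastmcp-llm-router")

@mcp.prompt()
def summarize(text: str) -> str:
    """Text summarization prompt."""
    return f"Summarize the following text in a few sentences:\n\n{text}"
```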
## Available Resources
- **config://server**: Server configuration
- **status://llm-service**: LLM service status
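Resources are registered similarly, keyed by URI. A sketch of `config://server`; the returned fields are illustrative:
```python
# Illustrative resource registration (assumes the `mcp` Python SDK)
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fastmcp-llm-router")

@mcp.resource("config://server")
def server_config() -> str:
    """Expose server configuration as a JSON resource."""
    return json.dumps({"name": "fastmcp-llm-router", "version": "0.1.0"})
```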
## Project Structure
```
fastmcp/
├── app/
│   ├── api/
│   │   └── v1/
│   │       └── api.py       # API routes
│   ├── core/
│   │   └── config.py        # Application configuration
│   ├── models/              # Database models
│   ├── services/            # Business logic
│   └── utils/               # Utility functions
├── tests/                   # Test files
├── mcp_server.py            # MCP server entry point
├── run_server.py            # CLI launcher
├── .env.mcp                 # Example environment variables
├── requirements.txt         # Project dependencies
└── README.md                # This file
```
## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.