# FastMCP - Model Context Protocol Server

FastMCP is a Model Context Protocol (MCP) server that provides LLM services through the MCP standard. It acts as a bridge between MCP clients and your local LLM service, enabling seamless integration with MCP-compatible applications.
## Features

- **MCP Protocol Compliance**: Full implementation of the Model Context Protocol
- **Tools**: Chat completion, model listing, health checks
- **Prompts**: Pre-built prompts for common tasks (assistant, code review, summarization)
- **Resources**: Server configuration and LLM service status
- **Streaming Support**: Both streaming and non-streaming responses
- **Configurable**: Environment-based configuration
- **Robust**: Built-in error handling and health monitoring
- **Integration Ready**: Works with any OpenAI-compatible LLM service
## Getting Started

### Prerequisites

- Python 3.9+
- pip
- A local LLM service running on port 5001 (OpenAI-compatible API)
- An MCP client (e.g., Claude Desktop, MCP Inspector)
### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/fastmcp.git
   cd fastmcp
   ```

2. Create a virtual environment and activate it:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Create a `.env` file (copy from `.env.mcp`) and configure it:

   ```bash
   # Server Settings
   MCP_SERVER_NAME=fastmcp-llm-router
   MCP_SERVER_VERSION=0.1.0

   # LLM Service Configuration
   LOCAL_LLM_SERVICE_URL=http://localhost:5001

   # Optional: API Key for LLM service
   # LLM_SERVICE_API_KEY=your_api_key_here

   # Timeouts (in seconds)
   LLM_REQUEST_TIMEOUT=60
   HEALTH_CHECK_TIMEOUT=10

   # Logging
   LOG_LEVEL=INFO
   ```
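The server presumably reads these variables at startup. As a minimal sketch of environment-based configuration with the defaults above (the function name `load_config` is illustrative, not the project's actual API):

```python
import os

def load_config() -> dict:
    """Read FastMCP settings from the environment, falling back to
    the defaults shown in the .env example above."""
    return {
        "server_name": os.environ.get("MCP_SERVER_NAME", "fastmcp-llm-router"),
        "server_version": os.environ.get("MCP_SERVER_VERSION", "0.1.0"),
        "llm_url": os.environ.get("LOCAL_LLM_SERVICE_URL", "http://localhost:5001"),
        "llm_api_key": os.environ.get("LLM_SERVICE_API_KEY"),  # optional, may be None
        "request_timeout": int(os.environ.get("LLM_REQUEST_TIMEOUT", "60")),
        "health_timeout": int(os.environ.get("HEALTH_CHECK_TIMEOUT", "10")),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }
```

Keeping all settings in the environment means the same code runs unchanged in development and in an MCP client's `env` block (see the Claude Desktop example below).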
## Running the MCP Server

**Option 1: Using the CLI script**

```bash
python run_server.py
```

**Option 2: Direct execution**

```bash
python mcp_server.py
```

**Option 3: With custom configuration**

```bash
python run_server.py --llm-url http://localhost:5001 --log-level DEBUG
```

The MCP server runs on stdio and can be connected to by MCP clients.
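Because the server speaks MCP over stdio, clients exchange JSON-RPC 2.0 messages with it. As a rough sketch of the framing (this follows the generic MCP `tools/call` method from the protocol spec, not FastMCP-specific code), a chat completion request would look like:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Ask the server's chat_completion tool (listed under "Available Tools") a question.
request = make_tool_call(1, "chat_completion", {
    "messages": [{"role": "user", "content": "Hello!"}],
    "model": "default",
})
print(request)
```

In practice an MCP client library handles this framing for you; the sketch only shows what travels over stdio.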
## MCP Client Integration

### Claude Desktop Integration

Add the following to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "fastmcp-llm-router": {
      "command": "python",
      "args": ["/path/to/fastmcp/mcp_server.py"],
      "env": {
        "LOCAL_LLM_SERVICE_URL": "http://localhost:5001"
      }
    }
  }
}
```

### MCP Inspector

Test your server with MCP Inspector:

```bash
npx @modelcontextprotocol/inspector python mcp_server.py
```

## Available Tools
### 1. Chat Completion

Send messages to your LLM service:

```json
{
  "name": "chat_completion",
  "arguments": {
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "model": "default",
    "temperature": 0.7
  }
}
```

### 2. List Models
Get the available models from your LLM service:

```json
{
  "name": "list_models",
  "arguments": {}
}
```

### 3. Health Check
Check whether your LLM service is running:

```json
{
  "name": "health_check",
  "arguments": {}
}
```

## Available Prompts
- `chat_assistant`: General AI assistant prompt
- `code_review`: Code review and analysis
- `summarize`: Text summarization

## Available Resources

- `config://server`: Server configuration
- `status://llm-service`: LLM service status
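Prompts and resources are fetched through the standard MCP `prompts/get` and `resources/read` methods. A hedged sketch of those requests on the wire (generic MCP framing; the resource URIs and prompt names come from the lists above, while the `text` argument to `summarize` is an assumption about its parameters):

```python
import json

def mcp_request(request_id: int, method: str, params: dict) -> str:
    """Build a generic JSON-RPC 2.0 request for an MCP method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Fetch the summarize prompt, filling its (assumed) text argument.
get_prompt = mcp_request(1, "prompts/get",
                         {"name": "summarize", "arguments": {"text": "..."}})

# Read the server-configuration resource by its URI.
read_config = mcp_request(2, "resources/read", {"uri": "config://server"})
```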
## Project Structure

```
fastmcp/
├── app/
│   ├── api/
│   │   └── v1/
│   │       └── api.py    # API routes
│   ├── core/
│   │   └── config.py     # Application configuration
│   ├── models/           # Database models
│   ├── services/         # Business logic
│   └── utils/            # Utility functions
├── tests/                # Test files
├── .env.example          # Example environment variables
├── requirements.txt      # Project dependencies
└── README.md             # This file
```

## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.