# FastMCP - Model Context Protocol Server
FastMCP is a Model Context Protocol (MCP) server that provides LLM services through the MCP standard. It acts as a bridge between MCP clients and your local LLM service, enabling seamless integration with MCP-compatible applications.
## Features
- 🚀 MCP Protocol Compliance: Full implementation of Model Context Protocol
- 🔧 Tools: Chat completion, model listing, health checks
- 📝 Prompts: Pre-built prompts for common tasks (assistant, code review, summarization)
- 📊 Resources: Server configuration and LLM service status
- 🔄 Streaming Support: Both streaming and non-streaming responses
- 🔒 Configurable: Environment-based configuration
- 🛡️ Robust: Built-in error handling and health monitoring
- 🔌 Integration Ready: Works with any OpenAI-compatible LLM service
## Getting Started

### Prerequisites
- Python 3.9+
- pip
- Local LLM service running on port 5001 (OpenAI-compatible API)
- MCP client (e.g., Claude Desktop, MCP Inspector)
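Before starting the server, you can confirm the local LLM service is reachable; an OpenAI-compatible service exposes a models endpoint (the port comes from the prerequisite above):

```bash
curl http://localhost:5001/v1/models
```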
### Installation

- Clone the repository
- Create a virtual environment and activate it
- Install dependencies
- Create a `.env` file (copy it from `.env.mcp`) and configure it, as shown in the sketch below
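A shell sketch of those steps; the repository URL and file names are assumptions, so adjust them to the actual project:

```bash
# Clone the repository (URL is a placeholder)
git clone https://github.com/<your-org>/fastmcp.git
cd fastmcp

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create the environment file from the template
cp .env.mcp .env
```

A minimal `.env` might then point the server at your local LLM service (the variable names here are hypothetical; check `.env.mcp` for the real ones):

```bash
LLM_BASE_URL=http://localhost:5001/v1   # hypothetical variable name
LLM_API_KEY=not-needed                  # many local services ignore the key
DEFAULT_MODEL=local-model               # hypothetical variable name
```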
### Running the MCP Server
**Option 1: Using the CLI script**
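For instance, if the project installs a console script (the name here is hypothetical):

```bash
fastmcp-server   # hypothetical console-script name defined by the project
```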
**Option 2: Direct execution**
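Or run the server file directly with Python (file name assumed):

```bash
python mcp_server.py   # assumed entry-point file
```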
**Option 3: With custom configuration**
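Environment variables can override the `.env` defaults for a single run (variable name as assumed above):

```bash
LLM_BASE_URL=http://localhost:8080/v1 python mcp_server.py
```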
The MCP server communicates over stdio, so MCP clients connect to it by launching the process.
## MCP Client Integration

### Claude Desktop Integration
Add to your Claude Desktop configuration:
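Claude Desktop reads MCP servers from its `claude_desktop_config.json`; an entry for this server might look like the following (the server name and both paths are placeholders):

```json
{
  "mcpServers": {
    "fastmcp": {
      "command": "/absolute/path/to/.venv/bin/python",
      "args": ["/absolute/path/to/fastmcp/mcp_server.py"]
    }
  }
}
```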
### MCP Inspector
Test your server with MCP Inspector:
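The Inspector can spawn a stdio server directly; assuming the entry point used above:

```bash
npx @modelcontextprotocol/inspector python mcp_server.py
```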
## Available Tools

### 1. Chat Completion
Send messages to your LLM service:
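A tool call's arguments might look like this; the exact parameter names depend on the server's tool schema, but an OpenAI-style shape is a reasonable guess:

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Summarize the MCP protocol in one sentence." }
  ],
  "stream": false
}
```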
### 2. List Models

Get the models available from your LLM service.
### 3. Health Check

Check whether your LLM service is running.
## Available Prompts
- chat_assistant: General AI assistant prompt
- code_review: Code review and analysis
- summarize: Text summarization
## Available Resources
- config://server: Server configuration
- status://llm-service: LLM service status
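From code, any MCP client can exercise the tools and resources above. Here is a minimal sketch using the official `mcp` Python SDK; the server command and the `health_check` tool name are assumptions based on the lists above:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from pydantic import AnyUrl


async def main() -> None:
    # Spawn the server over stdio (entry-point file name is assumed)
    params = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server exposes
            tools = await session.list_tools()
            print("tools:", [tool.name for tool in tools.tools])

            # Call the health-check tool (name assumed from the list above)
            health = await session.call_tool("health_check", {})
            print("health:", health.content)

            # Read the server-configuration resource
            config = await session.read_resource(AnyUrl("config://server"))
            print("config:", config.contents)


asyncio.run(main())
```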
## Project Structure
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.