Provides access to locally running LLMs via LM Studio's OpenAI-compatible API endpoints, enabling text generation with custom parameters such as temperature and token limits.
LMStudio-MCP
A Model Context Protocol (MCP) server that allows Claude to communicate with LLMs running locally via LM Studio.
Overview
LMStudio-MCP creates a bridge between Claude (with MCP capabilities) and your locally running LM Studio instance. This allows Claude to:
Check the health of your LM Studio API
List available models
Get the currently loaded model
Generate completions using your local models
This lets you use your own locally running models through Claude's interface, combining Claude's capabilities with models that stay private on your machine.
Related MCP server: Ollama MCP Server
Prerequisites
Python 3.7+
LM Studio installed and running locally with a model loaded
Claude with MCP access
Required Python packages (see Installation)
🚀 Quick Installation
One-Line Install (Recommended)
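The exact installer command lives in the repository; as a sketch, a one-line install for a project like this typically looks as follows (the repository owner and script path below are placeholders, not verified):

```bash
# Hypothetical installer path -- replace <owner> with the actual repository owner.
curl -fsSL https://raw.githubusercontent.com/<owner>/LMStudio-MCP/main/install.sh | bash
```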
Manual Installation Methods
1. Local Python Installation
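A minimal sketch of a local install, assuming dependencies are pinned in requirements.txt and the server is a single script (the script name is an assumption; check the repository):

```bash
# Clone the repository (<owner> is a placeholder) and install dependencies.
git clone https://github.com/<owner>/LMStudio-MCP.git
cd LMStudio-MCP
pip install -r requirements.txt

# Start the MCP server (the script name may differ in the actual repository).
python lmstudio_bridge.py
```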
2. Docker Installation
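A sketch assuming a Dockerfile at the repository root; the image tag is arbitrary:

```bash
# Build the image locally.
docker build -t lmstudio-mcp .

# MCP servers communicate over stdio, so run interactively; host networking
# lets the container reach LM Studio on localhost:1234 (on macOS/Windows,
# point the bridge at host.docker.internal instead).
docker run -i --rm --network host lmstudio-mcp
```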
3. Docker Compose
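Assuming the repository ships a docker-compose.yml, the workflow is roughly:

```bash
# From the repository root: build and start, follow logs, shut down.
docker compose up -d --build
docker compose logs -f
docker compose down
```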
For detailed deployment instructions, see DOCKER.md.
MCP Configuration
Quick Setup
Using GitHub directly (simplest):
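A hedged example of what the corresponding entry in Claude's MCP settings (e.g. claude_desktop_config.json) might look like; the uvx invocation, repository path, and executable name are assumptions, not verified against this project:

```json
{
  "mcpServers": {
    "lmstudio-mcp": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/<owner>/LMStudio-MCP", "lmstudio-mcp"]
    }
  }
}
```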
Using local installation:
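A sketch pointing Claude at a cloned copy; the script path is a placeholder:

```json
{
  "mcpServers": {
    "lmstudio-mcp": {
      "command": "python",
      "args": ["/path/to/LMStudio-MCP/lmstudio_bridge.py"]
    }
  }
}
```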
Using Docker:
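A sketch running the server through Docker; the image name matches the local build example above and is not an official published image:

```json
{
  "mcpServers": {
    "lmstudio-mcp": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--network", "host", "lmstudio-mcp"]
    }
  }
}
```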
For complete MCP configuration instructions, see MCP_CONFIGURATION.md.
Usage
Start LM Studio and ensure it's running on port 1234 (the default; a quick connectivity check follows this list)
Load a model in LM Studio
Configure Claude MCP with one of the configurations above
Connect to the MCP server in Claude when prompted
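Before connecting, you can confirm that LM Studio's server is reachable by querying its OpenAI-compatible models endpoint:

```bash
# Should return a JSON list of the models LM Studio has available.
curl http://localhost:1234/v1/models
```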
Available Functions
The bridge provides the following functions:
health_check(): Verify whether the LM Studio API is accessible
list_models(): Get a list of all available models in LM Studio
get_current_model(): Identify which model is currently loaded
chat_completion(prompt, system_prompt, temperature, max_tokens): Generate text from your local model
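Under the hood, these calls are forwarded to LM Studio's OpenAI-compatible endpoints. Here is a rough Python sketch of what chat_completion does; the default values are assumptions, and the real bridge's parameter handling may differ:

```python
import requests

def chat_completion(prompt, system_prompt="You are a helpful assistant.",
                    temperature=0.7, max_tokens=1024):
    """Rough equivalent of the bridge's chat_completion tool: forward the
    prompt to LM Studio's OpenAI-compatible chat completions endpoint."""
    response = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": prompt},
            ],
            "temperature": temperature,
            "max_tokens": max_tokens,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(chat_completion("Say hello in one sentence."))
```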
Deployment Options
This project supports multiple deployment methods:
| Method | Use Case | Pros | Cons |
|--------|----------|------|------|
| Local Python | Development, simple setup | Fast, direct control | Requires Python setup |
| Docker | Isolated environments | Clean, portable | Requires Docker |
| Docker Compose | Production deployments | Easy management | More complex setup |
| Kubernetes | Enterprise/scale | Highly scalable | Complex configuration |
| GitHub Direct | Zero setup | No local install needed | Requires internet |
Known Limitations
Some models (e.g., phi-3.5-mini-instruct_uncensored) may have compatibility issues
The bridge currently uses only the OpenAI-compatible API endpoints of LM Studio
Model responses will be limited by the capabilities of your locally loaded model
Troubleshooting
API Connection Issues
If Claude reports 404 errors when trying to connect to LM Studio:
Ensure LM Studio is running and has a model loaded
Check that LM Studio's server is running on port 1234
Verify your firewall isn't blocking the connection
Try using "127.0.0.1" instead of "localhost" in the API URL if issues persist
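For example, querying the models endpoint over the loopback address isolates the problem: a connection error or 404 here means LM Studio's server itself is the issue, not the MCP bridge:

```bash
curl http://127.0.0.1:1234/v1/models
```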
Model Compatibility
If certain models don't work correctly:
Some models might not fully support the OpenAI chat completions API format
Try different parameter values (temperature, max_tokens) for problematic models
Consider switching to a more compatible model if problems persist
For detailed troubleshooting help, see TROUBLESHOOTING.md.
🐳 Docker & Containerization
This project includes comprehensive Docker support:
Multi-architecture images (AMD64, ARM64/Apple Silicon)
Automated builds via GitHub Actions
Pre-built images available on GitHub Container Registry (see the pull example after this list)
Docker Compose for easy deployment
Kubernetes manifests for production deployments
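If you use the pre-built images, pulling one looks roughly like this (the GHCR path is a placeholder, not the project's confirmed image name):

```bash
# Docker selects the matching architecture (AMD64 or ARM64) automatically.
docker pull ghcr.io/<owner>/lmstudio-mcp:latest
```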
See DOCKER.md for complete containerization documentation.
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
License
MIT
Acknowledgements
This project was originally developed as "Claude-LMStudio-Bridge_V2" and has been renamed and open-sourced as "LMStudio-MCP".
🌟 If this project helps you, please consider giving it a star!