Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Local LLM MCP Server scan this document for PII and sensitive data locally"
That's it! The server will respond to your query, and you can continue using it as needed.
Local LLM MCP Server
A Model Context Protocol (MCP) server that bridges local LLMs running in LM Studio with Claude Desktop and other MCP clients. Keep your sensitive data private by running AI tasks locally while seamlessly integrating with cloud-based AI assistants.
Features
Privacy-First Design
Local Processing: All sensitive data stays on your machine
No Cloud Exposure: Private analysis, code review, and content processing happens locally
Privacy Levels: Configurable privacy protection (strict, moderate, minimal)
No Telemetry: Zero usage tracking or data collection
Dynamic Multi-Model Support
Auto-Discovery: Automatically detects all models loaded in LM Studio
Flexible Selection: Use different models for different tasks
Runtime Switching: Change default models during your session
Per-Request Override: Specify model for individual requests
Smart Initialization: First available model auto-selected as default
Comprehensive Tool Suite
Local Reasoning - General-purpose AI tasks with complete privacy
Complex problem solving and multi-step reasoning
Question answering and task planning
Context-aware responses
Private Analysis - 7 analysis types for sensitive content
Sentiment Analysis (domain-aware)
Entity Extraction (people, orgs, locations, domain-specific)
Content Classification
Summarization with key points
Privacy Scanning (PII, GDPR compliance)
Security Auditing (vulnerabilities, misconfigurations)
Secure Rewriting - Transform text while maintaining privacy
Style adaptation (formal, casual, professional)
Sensitive information removal
Privacy-preserving transformations
Code Analysis - Local code review and security
Security vulnerability detection
Code quality assessment
Bug detection and optimization suggestions
Template Completion - Intelligent form and document filling
Domain-Specific Intelligence
Specialized analysis for:
Medical: Healthcare context, HIPAA compliance, clinical terminology
Legal: Legal terminology, regulatory compliance, confidentiality
Financial: Financial regulations, market analysis, data protection
Technical: Software development, engineering contexts
Academic: Scholarly research, methodology, citations
Quick Start
Prerequisites
Node.js 18+
node --version   # should report >= 18.0.0
LM Studio
Download from lmstudio.ai
Load at least one model (e.g., Llama 3.2, Qwen, Mistral)
Start the local server (Server tab → Start Server)
Default URL: http://localhost:1234
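Before continuing, you can confirm the endpoint is reachable. This sketch (assuming Node 18+'s built-in fetch) lists model ids from LM Studio's OpenAI-compatible /v1/models endpoint, returning an empty array when the server is not running:

```typescript
// Connectivity check for LM Studio's local server.
// Returns the loaded model ids, or [] when the server is unreachable.
async function listLmStudioModels(baseUrl = "http://localhost:1234"): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/v1/models`);
    const body = (await res.json()) as { data: { id: string }[] };
    return body.data.map((m) => m.id);
  } catch {
    return []; // LM Studio not running, or the Server tab was never started
  }
}

listLmStudioModels().then((ids) =>
  console.log(ids.length ? `Models: ${ids.join(", ")}` : "LM Studio server not reachable"),
);
```

An empty result usually means the Server tab was never started or a different port is configured.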
Installation
Configure Claude Desktop
Edit your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Add this configuration:
Important: Use the absolute path to your installation.
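The configuration block itself was lost in formatting; a typical Claude Desktop MCP entry has the shape below (the server key, install path, and env variable are assumptions — substitute your own absolute path):

```json
{
  "mcpServers": {
    "local-llm": {
      "command": "node",
      "args": ["/absolute/path/to/local-llm-mcp/dist/index.js"],
      "env": {
        "LM_STUDIO_URL": "http://localhost:1234"
      }
    }
  }
}
```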
Start Using
Restart Claude Desktop - The server starts automatically
Discover Models - Read the local://models resource to see available models
Try It - Ask Claude to use the local_reasoning tool with a simple prompt
The server automatically:
Discovers all models loaded in LM Studio
Sets the first model as default
Provides full capability documentation via local://capabilities
Convenience Scripts
For easier server management, use the included scripts:
See SCRIPTS_GUIDE.md for detailed usage.
Remote Network Access
Access the server from other devices on your home network or connect Claude Desktop remotely!
Connect Claude Desktop Remotely
Quick Start (3 steps):
Complete Guide: REMOTE_QUICKSTART.md
Remote Access Methods
Method 1: Claude Desktop Custom Connector UI (Production)
For Claude Pro/Max/Team/Enterprise users
Requires valid SSL certificate (not self-signed)
Simple UI-based setup in Settings > Connectors
Guide: CLAUDE_DESKTOP_REMOTE.md
Method 2: mcp-remote Proxy (Development/Testing)
Works with self-signed certificates
Supports localhost and local networks
JSON configuration file
Examples: claude_desktop_config_examples.json
Network Access from Any Client
Available endpoints:
/ - Server information
/health - Health check
/mcp - MCP Streamable HTTP endpoint (GET/POST/DELETE)
The full guide covers:
Firewall configuration
Client examples (JavaScript, Python, cURL)
Troubleshooting
Multiple device scenarios
Transport Modes:
Local Mode (stdio): For Claude Desktop integration
HTTP Mode: Unencrypted network access
HTTPS Mode: Encrypted network access with SSL/TLS
Dual Mode: Run stdio + HTTP/HTTPS simultaneously!
Specification Compliance
MCP Streamable HTTP Transport (Protocol 2025-03-26)
Our implementation uses the latest MCP Streamable HTTP transport:
Protocol version: 2025-03-26
Full JSON-RPC 2.0 compliance
Session management via headers
SSE streaming for responses
Stateful mode with session IDs
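For reference, a Streamable HTTP session begins with a standard JSON-RPC 2.0 initialize request; the client name and version below are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

The server responds with its capabilities and a session id header that subsequent requests echo back.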
Migration Note: SSE transport (2024-11-05) has been replaced with Streamable HTTP per the MCP specification.
Available Tools
Core Tools
local_reasoning
Use the local LLM for specialized reasoning tasks while keeping data private.
private_analysis
Analyze sensitive content locally without cloud exposure.
secure_rewrite
Rewrite or transform text locally for privacy.
code_analysis
Analyze code locally for security, quality, or documentation.
template_completion
Complete templates or forms using the local LLM.
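As an illustration, a tools/call request for private_analysis might carry arguments like the following — the argument names here are assumptions, so read local://capabilities for the real schema:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "private_analysis",
    "arguments": {
      "content": "Patient record text to analyze...",
      "analysis_type": "privacy_scan",
      "domain": "medical"
    }
  }
}
```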
Resources
Available Resources
local://models - List of available models in LM Studio
local://status - Current status of the local LLM server
local://config - Server configuration and capabilities
Example Resource Usage
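The original example did not survive conversion; in raw MCP terms, reading a resource is a resources/read request:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": { "uri": "local://models" }
}
```

In Claude Desktop you can simply ask Claude to read local://models.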
Prompt Templates
Using Pre-built Templates
Available Templates
Privacy & Security: privacy-analysis, secure-rewrite
Code Analysis: code-security-review, code-optimization
Business: meeting-summary, email-draft, risk-assessment
Research: research-synthesis, literature-review
Content: content-adaptation, technical-documentation
Configuration
Model Configuration
Configure different models for different capabilities:
Privacy Settings
Performance Tuning
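The configuration examples were stripped in conversion; a hypothetical shape covering all three areas might look like this (every key below is an assumption — check the project's actual config schema; maxConcurrentRequests and requestTimeout are named in Troubleshooting):

```json
{
  "models": {
    "default": "auto",
    "code_analysis": "qwen2.5-coder-7b"
  },
  "privacy": {
    "level": "strict",
    "logging": false
  },
  "performance": {
    "maxConcurrentRequests": 2,
    "requestTimeout": 120000
  }
}
```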
Privacy Levels
Strict
Never expose personal names, addresses, phone numbers, emails
Generalize all specific locations and dates
Remove all identifying information
Use placeholders for sensitive data
Moderate
Protect personal identifiable information
Generalize specific details when appropriate
Maintain readability while ensuring privacy
Remove sensitive financial or health data
Minimal
Protect obvious sensitive information (SSNs, credit cards)
Remove personal contact information
Maintain the natural flow of the text
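To make the levels concrete, here is a minimal sketch of what strict-level placeholder substitution could look like. The patterns and placeholder names are illustrative only, not the server's actual implementation:

```typescript
// Illustrative strict-mode redaction: obvious PII is replaced with placeholders.
function redactStrict(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+(\.[\w-]+)+/g, "[EMAIL]")            // email addresses
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")                   // US SSNs
    .replace(/\b(?:\d{4}[ -]?){3}\d{4}\b/g, "[CARD]")             // 16-digit card numbers
    .replace(/\b\(?\d{3}\)?[ -.]?\d{3}[ -.]?\d{4}\b/g, "[PHONE]"); // US phone numbers
}

console.log(redactStrict("Email jane@example.com or call 555-123-4567."));
// → "Email [EMAIL] or call [PHONE]."
```

A real implementation would let the local LLM judge context rather than rely on regexes alone, which is why the level is configurable.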
Development
Project Structure
Building
Adding Custom Tools
Define the tool in
index.ts:
Implement the handler:
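The original snippets were stripped during conversion; the sketch below shows the general shape — a tool definition plus an async handler returning MCP-style content. The tool and helper names are hypothetical, not the project's actual API:

```typescript
// Hypothetical custom tool: count words locally.
const wordCountTool = {
  name: "word_count",
  description: "Count the words in a document without the text leaving the machine",
  inputSchema: {
    type: "object",
    properties: { text: { type: "string" } },
    required: ["text"],
  },
};

// Handler: receives validated arguments, returns MCP-style text content.
async function handleWordCount(args: { text: string }) {
  const count = args.text.trim().split(/\s+/).filter(Boolean).length;
  return { content: [{ type: "text", text: `${count} words` }] };
}
```

Register both in index.ts alongside the existing tools so the server advertises the new capability.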
Troubleshooting
Common Issues
LM Studio Connection Issues
Ensure LM Studio is running and the server is started
Check that the base URL matches your LM Studio configuration
Verify the model is loaded and available
Performance Issues
Adjust the maxConcurrentRequests setting
Increase requestTimeout for complex requests
Consider using a more powerful local model
Privacy Concerns
Review and adjust privacy level settings
Enable strict privacy mode for sensitive data
Disable logging if handling confidential information
Debug Mode
Set environment variable for verbose logging:
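The command itself was lost in formatting; the usual pattern is to set a variable before launching the server. The variable name below is an assumption — check the server's source for the one it actually reads:

```shell
# Hypothetical debug variable; substitute the flag the project actually uses.
export MCP_DEBUG=true
echo "MCP_DEBUG=$MCP_DEBUG"
```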
License
MIT License - see LICENSE file for details.
Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
Support
Create an issue for bug reports
Start a discussion for feature requests
Check the documentation for common questions
Note: This server is designed to work with local LLMs for privacy-sensitive tasks. Always review the privacy settings and ensure they meet your requirements before processing confidential data.