Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@followed by the MCP server name and your instructions, e.g., "@LibreModel MCP Serverchat with temperature 0.7: explain quantum computing basics"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
# LibreModel MCP Server 🤖
A Model Context Protocol (MCP) server that bridges Claude Desktop with your local LLM instance running via llama-server.
## Features

- 💬 **Full conversation support** with LibreModel through Claude Desktop
- 🎛️ **Complete parameter control** (temperature, max_tokens, top_p, top_k)
- ✅ **Health monitoring** and server status checks
- 🧪 **Built-in testing tools** for different capabilities
- 📊 **Performance metrics** and token usage tracking
- 🔧 **Easy configuration** via environment variables
Related MCP server: Claude-LMStudio Bridge
## Quick Start

Install the published package from npm, or build from source as described below:

```bash
npm install @openconstruct/llama-mcp-server
```
### 1. Install Dependencies

```bash
cd llama-mcp
npm install
```

### 2. Build the Server

```bash
npm run build
```

### 3. Start Your LibreModel

Make sure llama-server is running with your model:

```bash
./llama-server -m lm37.gguf -c 2048 --port 8080
```

### 4. Configure Claude Desktop

Add this to your Claude Desktop configuration (`~/.config/claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "libremodel": {
      "command": "node",
      "args": ["/home/jerr/llama-mcp/dist/index.js"]
    }
  }
}
```

### 5. Restart Claude Desktop
Claude will now have access to LibreModel through MCP!
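If your llama-server listens on a non-default address, you can pass the URL to the MCP server in the same config entry. A minimal sketch, assuming the standard Claude Desktop MCP config schema's `env` field and the `LLAMA_SERVER_URL` variable described under Configuration below:

```json
{
  "mcpServers": {
    "libremodel": {
      "command": "node",
      "args": ["/home/jerr/llama-mcp/dist/index.js"],
      "env": {
        "LLAMA_SERVER_URL": "http://localhost:9090"
      }
    }
  }
}
```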
## Usage

Once configured, you can use these tools in Claude Desktop:

### 💬 chat - Main conversation tool

Use the chat tool to ask LibreModel: "What is your name and what can you do?"

### 🧪 quick_test - Test LibreModel capabilities

Run a quick_test with type "creative" to see if LibreModel can write poetry.

### 🏥 health_check - Monitor server status

Use health_check to see if LibreModel is running properly.

## Configuration

Set environment variables to customize behavior:

```bash
export LLAMA_SERVER_URL="http://localhost:8080"  # Default llama-server URL
```

## Available Tools

| Tool | Description | Parameters |
|------|-------------|------------|
| `chat` | Converse with LibreModel | Message text, plus sampling options (`temperature`, `max_tokens`, `top_p`, `top_k`) |
| `quick_test` | Run predefined capability tests | Test type (e.g. "creative") |
| `health_check` | Check server health and status | None |
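Under the hood, an MCP client such as Claude Desktop invokes these tools with JSON-RPC `tools/call` requests. A sketch of such a request for the `chat` tool (framing per the MCP specification; the argument names here are illustrative, not confirmed against the server's schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "chat",
    "arguments": {
      "message": "Explain quantum computing basics",
      "temperature": 0.7
    }
  }
}
```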
## Resources

- **Configuration**: View current server settings
- **Instructions**: Detailed usage guide and setup instructions
## Development

```bash
# Install dependencies
npm install

# Development mode (auto-rebuild)
npm run dev

# Build for production
npm run build

# Start the server directly
npm start
```

## Architecture

```
Claude Desktop ↔ Llama MCP Server ↔ llama-server API ↔ Local Model
```

The MCP server acts as a bridge, translating MCP protocol messages into llama-server API calls and formatting responses for Claude Desktop.
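The translation step can be illustrated with a small sketch (hypothetical function and argument names; the llama-server field names `prompt`, `temperature`, `n_predict`, `top_p`, and `top_k` follow llama.cpp's HTTP API, and the defaults shown are illustrative, not the server's actual defaults):

```javascript
// Sketch: map chat-tool arguments onto a llama-server /completion payload.
function buildCompletionPayload(args) {
  return {
    prompt: args.message,                 // the user's message text
    temperature: args.temperature ?? 0.7, // sampling temperature
    n_predict: args.max_tokens ?? 512,    // llama-server's name for max tokens
    top_p: args.top_p ?? 0.9,
    top_k: args.top_k ?? 40,
  };
}

console.log(JSON.stringify(buildCompletionPayload({ message: "Hello", temperature: 0.2 })));
```

The bridge would POST a payload like this to `LLAMA_SERVER_URL + "/completion"` and relay the generated text back to Claude Desktop as the tool result.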
## Troubleshooting

**"Cannot reach Llama server"**

- Ensure llama-server is running on the configured port
- Check that the model is loaded and responding
- Verify firewall/network settings

**"Tool not found in Claude Desktop"**

- Restart Claude Desktop after configuration changes
- Check that the path to `index.js` is correct and absolute
- Verify the MCP server builds without errors

**Poor response quality**

- Adjust temperature and sampling parameters
- Try different system prompts
## License

CC0-1.0 - Public Domain. Use freely!

---

Built with ❤️ for open-source AI and the LibreModel project, by Claude Sonnet 4.