
LibreModel MCP Server 🤖

A Model Context Protocol (MCP) server that bridges Claude Desktop with your local LLM instance running via llama-server.

Features

  • 💬 Full conversation support with LibreModel through Claude Desktop
  • 🎛️ Complete parameter control (temperature, max_tokens, top_p, top_k)
  • ✅ Health monitoring and server status checks
  • 🧪 Built-in testing tools for different capabilities
  • 📊 Performance metrics and token usage tracking
  • 🔧 Easy configuration via environment variables

Quick Start

Install from npm:

npm install @openconstruct/llama-mcp-server

Or build from source:


1. Install Dependencies

cd llama-mcp
npm install

2. Build the Server

npm run build

3. Start Your LibreModel

Make sure llama-server is running with your model:

./llama-server -m lm37.gguf -c 2048 --port 8080
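
Before wiring up Claude Desktop, it can help to confirm the server is answering. Recent llama.cpp builds of llama-server expose a /health endpoint (if yours does not, any completion request works as a smoke test):

curl http://localhost:8080/health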

4. Configure Claude Desktop

Add this to your Claude Desktop configuration (~/.config/claude/claude_desktop_config.json):

{ "mcpServers": { "libremodel": { "command": "node", "args": ["/home/jerr/llama-mcp/dist/index.js"] } } }

5. Restart Claude Desktop

Claude will now have access to LibreModel through MCP!

Usage

Once configured, you can use these tools in Claude Desktop:

💬 chat - Main conversation tool

Use the chat tool to ask LibreModel: "What is your name and what can you do?"

🧪 quick_test - Test LibreModel capabilities

Run a quick_test with type "creative" to see if LibreModel can write poetry

🏥 health_check - Monitor server status

Use health_check to see if LibreModel is running properly

Configuration

Set environment variables to customize behavior:

export LLAMA_SERVER_URL="http://localhost:8080" # Default llama-server URL

Available Tools

| Tool | Description | Parameters |
|------|-------------|------------|
| chat | Converse with LibreModel | message, temperature, max_tokens, top_p, top_k, system_prompt |
| quick_test | Run predefined capability tests | test_type (hello/math/creative/knowledge) |
| health_check | Check server health and status | None |
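
Under the hood these tools are invoked over MCP's tools/call method. As a rough illustration (argument values are made up, and which parameters are optional is not documented here), a chat call carries a payload like:

{
  "name": "chat",
  "arguments": {
    "message": "What is your name and what can you do?",
    "system_prompt": "You are LibreModel, a helpful local assistant.",
    "temperature": 0.7,
    "max_tokens": 512,
    "top_p": 0.9,
    "top_k": 40
  }
}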

Resources

  • Configuration: View current server settings
  • Instructions: Detailed usage guide and setup instructions

Development

# Install dependencies
npm install

# Development mode (auto-rebuild)
npm run dev

# Build for production
npm run build

# Start the server directly
npm start


Architecture

Claude Desktop ←→ LibreModel MCP Server ←→ llama-server API ←→ Local Model

The MCP server acts as a bridge, translating MCP protocol messages into llama-server API calls and formatting responses for Claude Desktop.
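
As a sketch of that translation, a chat call with the parameters listed above maps onto a llama-server completion request roughly like the one below. The field names follow llama.cpp's /completion endpoint; whether this bridge uses /completion or the OpenAI-compatible /v1/chat/completions route is an implementation detail not shown here:

curl http://localhost:8080/completion -H "Content-Type: application/json" -d '{
  "prompt": "What is your name and what can you do?",
  "temperature": 0.7,
  "top_p": 0.9,
  "top_k": 40,
  "n_predict": 512
}'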

Troubleshooting

"Cannot reach LLama server"

  • Ensure llama-server is running on the configured port
  • Check that the model is loaded and responding
  • Verify firewall/network settings

"Tool not found in Claude Desktop"

  • Restart Claude Desktop after configuration changes
  • Check that the path to index.js is correct and absolute
  • Verify the MCP server builds without errors

Poor response quality

  • Adjust temperature and sampling parameters
  • Try different system prompts

License

CC0-1.0 - Public Domain. Use freely!


Built with ❤️ for open-source AI and the LibreModel project, by Claude Sonnet 4.
