The Kobold MCP Server enables integration with KoboldAI and provides both MCP and OpenAI-compatible APIs for various AI tasks:
Text Generation: Generate text with customizable parameters (temperature, max_length, stop_sequence)
Chat Completion: Engage in conversations with persistent memory
Image Generation: Create images from text (txt2img) or transform existing images (img2img)
Image Captioning: Generate descriptions for images using interrogator models
Audio Processing: Transcribe audio with Whisper and convert text to speech
Web Search: Perform searches via DuckDuckGo
Token Operations: Count tokens, convert token IDs to text, retrieve log probabilities
System Information: Access model details, version info, and performance metrics
Utility Functions: Abort ongoing generations, check context settings
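As a sketch of the text-generation parameters listed above, the request body forwarded to KoboldAI's `/api/v1/generate` endpoint would look roughly like this. The field names (`temperature`, `max_length`, `stop_sequence`) follow KoboldAI's public API; the `buildGenerateRequest` helper and its default values are illustrative assumptions, not part of this server.

```typescript
// Sketch of a JSON body for KoboldAI's /api/v1/generate endpoint.
// Field names follow KoboldAI's API; the helper itself is hypothetical.
interface GenerateRequest {
  prompt: string;
  temperature?: number;   // sampling randomness
  max_length?: number;    // number of tokens to generate
  stop_sequence?: string[];
}

function buildGenerateRequest(
  prompt: string,
  options: Omit<GenerateRequest, "prompt"> = {}
): GenerateRequest {
  return {
    prompt,
    temperature: options.temperature ?? 0.7,  // assumed default
    max_length: options.max_length ?? 80,     // assumed default
    stop_sequence: options.stop_sequence ?? [],
  };
}

const req = buildGenerateRequest("Once upon a time", { temperature: 0.5 });
console.log(JSON.stringify(req));
```

Omitted options fall back to the defaults above, so callers only need to specify the parameters they want to override.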
Kobold MCP Server
A Model Context Protocol (MCP) server implementation for interfacing with KoboldAI. This server enables integration between KoboldAI's text generation capabilities and MCP-compatible applications.
Features
Text generation with KoboldAI
Chat completion with persistent memory
OpenAI-compatible API endpoints
Stable Diffusion integration
Built on the official MCP SDK
TypeScript implementation
Installation
Prerequisites
Node.js (v16 or higher)
npm or yarn package manager
A running KoboldAI instance
Usage
Configuration
The server can be configured through environment variables or a configuration object:
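A minimal sketch of what such a configuration loader might look like. The environment variable names (`KOBOLD_API_URL`, `KOBOLD_API_TIMEOUT`) and defaults are illustrative assumptions, not the server's documented settings; port 5001 is KoboldAI's usual local default.

```typescript
// Hypothetical configuration loader: environment variables take
// precedence over an explicit config object, which falls back to defaults.
// Variable names and defaults here are illustrative assumptions.
interface KoboldConfig {
  apiUrl: string;
  timeoutMs: number;
}

function loadConfig(overrides: Partial<KoboldConfig> = {}): KoboldConfig {
  return {
    apiUrl:
      process.env.KOBOLD_API_URL ??
      overrides.apiUrl ??
      "http://localhost:5001", // KoboldAI's usual local port
    timeoutMs: Number(
      process.env.KOBOLD_API_TIMEOUT ?? overrides.timeoutMs ?? 30_000
    ),
  };
}

const config = loadConfig();
console.log(config.apiUrl);
```

Environment variables winning over the config object keeps deployments overridable without code changes.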
Supported APIs
Core KoboldAI API (text generation, model info)
Chat completion with conversation memory
Text completion (OpenAI-compatible)
Stable Diffusion integration (txt2img, img2img)
Audio transcription and text-to-speech
Web search capabilities
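To illustrate the OpenAI-compatible surface listed above, a client would send the standard OpenAI chat-completion payload. The model identifier and endpoint path shown here are assumptions; the message shape itself is the standard OpenAI format.

```typescript
// Standard OpenAI-style chat-completion request body. The model name
// and the endpoint path in the comment below are illustrative; any
// OpenAI-compatible client would send a payload shaped like this.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  temperature?: number;
}

const body: ChatCompletionRequest = {
  model: "koboldcpp", // illustrative model identifier
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" },
  ],
  temperature: 0.7,
};

// POSTed to e.g. http://localhost:5001/v1/chat/completions (path assumed)
console.log(JSON.stringify(body));
```

Because the payload matches the OpenAI schema, existing OpenAI client libraries can be pointed at the server by changing only the base URL.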
Development
Clone the repository:
Install dependencies:
Build the project:
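The steps above would typically look like the following. The repository URL is left as a placeholder since it is not given here, and the directory name and `npm run build` script are assumptions based on a standard TypeScript project layout.

```shell
# Clone the repository (replace <repository-url> with the actual URL)
git clone <repository-url>
cd kobold-mcp-server   # directory name assumed

# Install dependencies
npm install

# Build the project (TypeScript -> JavaScript; script name assumed)
npm run build
```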
Dependencies
@modelcontextprotocol/sdk: ^1.0.1
node-fetch: ^2.6.1
zod: ^3.20.0
zod-to-json-schema: ^3.23.5
Contributing
Contributions are welcome! Feel free to submit a pull request.
License
MIT License - see LICENSE file for details.
Support
For issues and feature requests, please use the GitHub issue tracker.