# Kobold MCP Server
## Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
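Since the server takes no environment variables, registering it in an MCP client is a matter of pointing the client at the server's entry point. The snippet below is an illustrative config fragment only: the `command`, the path, and the `"kobold"` key are placeholder assumptions, not values documented by this listing.

```json
{
  "mcpServers": {
    "kobold": {
      "command": "node",
      "args": ["/path/to/KoboldCPP-MCP-Server/dist/index.js"]
    }
  }
}
```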
## Capabilities

Server capabilities have not been inspected yet.
### Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|---|---|
| kobold_max_context_length | Get current max context length setting |
| kobold_max_length | Get current max length setting |
| kobold_generate | Generate text with KoboldAI |
| kobold_model_info | Get current model information |
| kobold_version | Get KoboldAI version information |
| kobold_perf_info | Get performance information |
| kobold_token_count | Count tokens in text |
| kobold_detokenize | Convert token IDs to text |
| kobold_transcribe | Transcribe audio using Whisper |
| kobold_web_search | Search the web via DuckDuckGo |
| kobold_tts | Generate text-to-speech audio |
| kobold_abort | Abort the currently ongoing generation |
| kobold_last_logprobs | Get token logprobs from the last request |
| kobold_sd_models | List available Stable Diffusion models |
| kobold_sd_samplers | List available Stable Diffusion samplers |
| kobold_txt2img | Generate image from text prompt |
| kobold_img2img | Transform existing image using prompt |
| kobold_interrogate | Generate caption for image |
| kobold_chat | Chat completion (OpenAI-compatible) |
| kobold_complete | Text completion (OpenAI-compatible) |
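An MCP client invokes any of the tools above with a JSON-RPC 2.0 `tools/call` request, per the Model Context Protocol specification. The sketch below builds such a message; the argument names passed to `kobold_generate` (`prompt`, `max_length`) are assumptions for illustration, not parameters confirmed by this listing.

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical invocation; "prompt" and "max_length" are assumed argument names.
msg = build_tool_call(1, "kobold_generate",
                      {"prompt": "Once upon a time", "max_length": 80})
print(msg)
```

The same helper works for any tool in the table; only the `name` and `arguments` fields change.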
### Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| No prompts | |
### Resources

Contextual data attached to and managed by the client.

| Name | Description |
|---|---|
| No resources | |
## MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/PhialsBasement/KoboldCPP-MCP-Server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.
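The same endpoint can be queried from Python with the standard library. The URL is taken from the curl example above; the field names read out of the response (`name`, `tools`) are an assumed, illustrative schema, since the API's actual response shape is not documented here.

```python
import json
import urllib.request

# Endpoint from the curl example above.
URL = "https://glama.ai/api/mcp/v1/servers/PhialsBasement/KoboldCPP-MCP-Server"

def parse_server_entry(body: str) -> dict:
    """Decode a directory entry. The "name"/"tools" keys are an assumed
    schema for illustration, not documented by the API."""
    entry = json.loads(body)
    return {"name": entry.get("name"), "tools": entry.get("tools", [])}

def fetch_server_info(url: str = URL) -> dict:
    """GET the directory entry and decode it (requires network access)."""
    with urllib.request.urlopen(url) as resp:
        return parse_server_entry(resp.read().decode("utf-8"))
```

`fetch_server_info()` performs a live HTTP request; `parse_server_entry` can be exercised offline against any JSON body.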