# mcp-llm

## Server Configuration

The server is configured through the following environment variables.

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| `LLM_MODEL_PROVIDER` | Yes | The model provider (e.g., `bedrock`, `ollama`, `openai`, `openai-compatible`) | |
| `LLM_MODEL_NAME` | Yes | The name of the model to use (e.g., `qwen2-32b:q6_k`, `anthropic.claude-3-7-sonnet-20250219-v1:0`) | |
| `LLM_BASE_URL` | No | Base URL for the model provider (e.g., `https://ollama.internal`, `http://my-openai-compatible-server.com:3000/v1`) | |
| `OPENAI_API_KEY` | No | API key for OpenAI (required when using the OpenAI provider) | |
| `LLM_TIMEOUT_S` | No | Timeout in seconds for LLM requests (e.g., 240 for 4 minutes) | `240` |
| `LLM_NUM_CTX` | No | Context window size (e.g., 16384) | |
| `LLM_TEMPERATURE` | No | Temperature parameter for the model (e.g., 0.2) | |
| `LLM_TOP_P` | No | Top-p parameter for the model (e.g., 0.85) | |
| `LLM_TOP_K` | No | Top-k parameter for the model (e.g., 40) | |
| `LLM_MIN_P` | No | Min-p parameter for the model (e.g., 0.05) | |
| `LLM_REPETITION_PENALTY` | No | Repetition penalty parameter for the model (e.g., 1.05) | |
| `LLM_ALLOW_FILE_WRITE` | No | Set to `true` to allow the `generate_code_to_file` tool to write to files | `false` |
| `LLM_SYSTEM_PROMPT_ASK_QUESTION` | No | System prompt for the `ask_question` tool | |
| `LLM_SYSTEM_PROMPT_GENERATE_CODE` | No | System prompt for the `generate_code` tool | |
| `LLM_SYSTEM_PROMPT_GENERATE_DOCUMENTATION` | No | System prompt for the `generate_documentation` tool | |
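
As a rough sketch, these variables are typically passed through an MCP client's server configuration. The `command` and `args` values below are placeholders (shown in angle brackets) for however you launch the server in your setup; the variable names and example values come from the table above.

```json
{
  "mcpServers": {
    "llm": {
      "command": "<command-to-launch-mcp-llm>",
      "args": ["<args-for-your-launch-command>"],
      "env": {
        "LLM_MODEL_PROVIDER": "ollama",
        "LLM_MODEL_NAME": "qwen2-32b:q6_k",
        "LLM_BASE_URL": "https://ollama.internal",
        "LLM_NUM_CTX": "16384",
        "LLM_TEMPERATURE": "0.2",
        "LLM_TIMEOUT_S": "240",
        "LLM_ALLOW_FILE_WRITE": "false"
      }
    }
  }
}
```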

## Schema

### Prompts

Interactive templates invoked by user choice.

No prompts.

### Resources

Contextual data attached and managed by the client.

No resources.

### Tools

Functions exposed to the LLM to take actions.

| Name | Description |
| ---- | ----------- |
| `generate_code` | Generate code based on a description |
| `generate_code_to_file` | Generate code and write it directly to a file at a specific line number (requires `LLM_ALLOW_FILE_WRITE=true`) |
| `generate_documentation` | Generate documentation for code |
| `ask_question` | Ask a question to the LLM |
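
For reference, the sketch below shows what a `tools/call` request for the `generate_code` tool could look like over MCP's JSON-RPC transport. The argument names (`description`, `language`) are illustrative assumptions, not documented here; check the input schema the server reports for each tool to see its actual parameters.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_code",
    "arguments": {
      "description": "A function that parses ISO 8601 timestamps",
      "language": "typescript"
    }
  }
}
```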