mcp-llm
by sammcj
Server Configuration
The environment variables used to configure the server. Only LLM_MODEL_PROVIDER and LLM_MODEL_NAME are required; all other variables are optional.
Name | Required | Description | Default |
---|---|---|---|
LLM_MODEL_PROVIDER | Yes | The model provider (e.g., bedrock, ollama, openai, openai-compatible) | |
LLM_MODEL_NAME | Yes | The name of the model to use (e.g., qwen2-32b:q6_k, anthropic.claude-3-7-sonnet-20250219-v1:0) | |
LLM_BASE_URL | No | Base URL for the model provider (e.g., https://ollama.internal, http://my-openai-compatible-server.com:3000/v1) | |
OPENAI_API_KEY | No | API key for OpenAI (required when using the OpenAI provider) | |
LLM_TEMPERATURE | No | Temperature parameter for the model (e.g., 0.2) | |
LLM_TOP_P | No | Top-p parameter for the model (e.g., 0.85) | |
LLM_TOP_K | No | Top-k parameter for the model (e.g., 40) | |
LLM_MIN_P | No | Min-p parameter for the model (e.g., 0.05) | |
LLM_REPETITION_PENALTY | No | Repetition penalty parameter for the model (e.g., 1.05) | |
LLM_NUM_CTX | No | Context window size in tokens (e.g., 16384) | |
LLM_TIMEOUT_S | No | Timeout in seconds for LLM requests (e.g., 240 for 4 minutes) | 240 |
LLM_ALLOW_FILE_WRITE | No | Set to true to allow the generate_code_to_file tool to write to files | false |
LLM_SYSTEM_PROMPT_ASK_QUESTION | No | System prompt for the ask_question tool | |
LLM_SYSTEM_PROMPT_GENERATE_CODE | No | System prompt for the generate_code tool | |
LLM_SYSTEM_PROMPT_GENERATE_DOCUMENTATION | No | System prompt for the generate_documentation tool | |
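
For example, an MCP client that launches the server over stdio can pass these variables through its server configuration. The snippet below is a minimal sketch assuming a Claude Desktop-style `mcpServers` entry and an `mcp-llm` launch command; the server name, command, and values are illustrative placeholders, so adjust them to match your installation and chosen provider.

```json
{
  "mcpServers": {
    "llm": {
      "command": "mcp-llm",
      "env": {
        "LLM_MODEL_PROVIDER": "ollama",
        "LLM_MODEL_NAME": "qwen2-32b:q6_k",
        "LLM_BASE_URL": "https://ollama.internal",
        "LLM_TEMPERATURE": "0.2",
        "LLM_NUM_CTX": "16384",
        "LLM_ALLOW_FILE_WRITE": "true"
      }
    }
  }
}
```

Variables left unset fall back to the defaults listed in the table above where one is given.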
Schema
Prompts
Interactive templates invoked by user choice
This server does not expose any prompts.
Resources
Contextual data attached and managed by the client
This server does not expose any resources.
Tools
Functions exposed to the LLM to take actions
Name | Description |
---|---|
generate_code | Generate code based on a description |
generate_code_to_file | Generate code and write it directly to a file at a specific line number |
generate_documentation | Generate documentation for code |
ask_question | Ask a question to the LLM |
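
As an illustration of how these tools might be invoked, the payload below shows a hypothetical tool call for generate_code_to_file. The argument names (`description`, `language`, `file_path`, `line_number`) are assumptions for illustration only, since the tool's input schema is not listed here; consult the server's published schema for the actual fields.

```json
{
  "name": "generate_code_to_file",
  "arguments": {
    "description": "A function that validates an email address",
    "language": "typescript",
    "file_path": "./src/validate.ts",
    "line_number": 1
  }
}
```

Note that generate_code_to_file only writes to disk when LLM_ALLOW_FILE_WRITE is set to true, as described in the configuration table above.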