## Server Configuration

The following environment variables configure the server.
| Name | Required | Description | Default |
|---|---|---|---|
| `AI_PROVIDER` | No | The AI provider to use (options: `openai`, `ollama`) | `openai` |
| `OLLAMA_MODEL` | No | The Ollama model to use | `llama2` |
| `OPENAI_MODEL` | No | The OpenAI model to use | `o1-preview` |
| `OPENAI_API_KEY` | No | Your OpenAI API key | |
| `OLLAMA_BASE_URL` | No | The base URL for the Ollama API | `http://localhost:11434` |
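As an illustrative sketch of how the variables above could be resolved with their documented defaults (this is a hypothetical helper, not the server's actual loading code; the function name `load_config` is an assumption):

```python
import os

def load_config(env=os.environ):
    """Resolve server settings with the documented defaults.

    Illustrative sketch only; the server's actual loading logic may differ.
    """
    cfg = {
        "ai_provider": env.get("AI_PROVIDER", "openai"),
        "ollama_model": env.get("OLLAMA_MODEL", "llama2"),
        "openai_model": env.get("OPENAI_MODEL", "o1-preview"),
        "openai_api_key": env.get("OPENAI_API_KEY"),  # no default value
        "ollama_base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    }
    # The openai provider cannot make requests without a key.
    if cfg["ai_provider"] == "openai" and not cfg["openai_api_key"]:
        raise RuntimeError("OPENAI_API_KEY is required when AI_PROVIDER=openai")
    return cfg
```

Note that although `OPENAI_API_KEY` is listed as not required, it is effectively mandatory whenever `AI_PROVIDER` is left at its `openai` default.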
## Schema

### Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| *No prompts* | |
### Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| *No resources* | |
### Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|---|---|
| `generate_spec` | Generate a specification document using the OpenAI O3 model |
| `review_spec` | Review a specification for completeness and provide critical feedback |
| `review_code` | Review code changes and provide feedback |
| `run_tests` | Run standardized tests for the project (with coverage when no pattern is specified) |
| `run_linter` | Run the standardized linter for the project |
| `notify` | Provide audio notifications to users (macOS only) |
| `music` | Control Spotify for background music (macOS only) |
| `memory` | Store and retrieve temporary key-value pairs in memory (data is lost on MCP server restart) |
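The ephemeral semantics of the `memory` tool can be sketched as follows (a hypothetical illustration, not the server's actual implementation; the class and method names are assumptions):

```python
# Hypothetical sketch of the `memory` tool's semantics: values live in a
# plain in-process dict, so everything stored here is lost whenever the
# MCP server process restarts.
class MemoryTool:
    def __init__(self):
        self._store = {}  # exists only for the lifetime of this process

    def set(self, key, value):
        """Store a temporary key-value pair."""
        self._store[key] = value

    def get(self, key, default=None):
        """Retrieve a previously stored value, or the default if absent."""
        return self._store.get(key, default)

    def delete(self, key):
        """Remove a key if present; missing keys are ignored."""
        self._store.pop(key, None)
```

A server restart recreates the store empty, which is why the table warns that data does not survive an MCP server restart.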