Server Configuration
Describes the environment variables used to configure the server.
| Name | Required | Description | Default |
|---|---|---|---|
| OLLAMA_HOST | No | The host URL for the Ollama server. | http://localhost:11434 |
| GROQ_API_KEY | Yes | API key for Groq models. | |
| DEFAULT_MODEL | No | The default model to use for prompts. | openai:gpt-4o-mini |
| GEMINI_API_KEY | Yes | API key for Google Gemini models. | |
| OPENAI_API_KEY | Yes | API key for OpenAI models. | |
| DEEPSEEK_API_KEY | Yes | API key for DeepSeek models. | |
| ANTHROPIC_API_KEY | Yes | API key for Anthropic models. | |
| DEFAULT_TEAM_MODELS | No | A JSON array string representing the default list of models used by the agile team. | ["openai:gpt-4.1","anthropic:claude-3-7-sonnet","gemini:gemini-2.5-pro"] |
| DEFAULT_DECISION_MAKER_MODEL | No | The default model used to make decisions in team workflows. | openai:gpt-4o-mini |
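For local runs, these variables can be exported in the shell or kept in a dotenv-style file, assuming your launcher loads one. A minimal sketch with placeholder values (note that DEFAULT_TEAM_MODELS must be a JSON array encoded as a string):

```env
# Placeholder values -- substitute real keys; all *_API_KEY values are secrets.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
GROQ_API_KEY=gsk_...
DEEPSEEK_API_KEY=...

# Optional overrides (defaults shown).
OLLAMA_HOST=http://localhost:11434
DEFAULT_MODEL=openai:gpt-4o-mini
DEFAULT_DECISION_MAKER_MODEL=openai:gpt-4o-mini
DEFAULT_TEAM_MODELS=["openai:gpt-4.1","anthropic:claude-3-7-sonnet","gemini:gemini-2.5-pro"]
```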
Capabilities
Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | {"listChanged": false} |
| prompts | {"listChanged": false} |
| resources | {"subscribe": false, "listChanged": false} |
| experimental | {} |
Tools
Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| prompt_tool | Send a text prompt to multiple LLM models and return their responses.<br>Args:<br>- text: The prompt text to send to the models.<br>- models_prefixed_by_provider: List of models in "provider:model" format (e.g., "openai:gpt-4"); if None, defaults to ["openai:gpt-4o-mini"].<br>Returns: List of responses, one from each specified model. |
| prompt_from_file_tool | Read a prompt from a file and send it to multiple LLM models.<br>Args:<br>- file_path: Path to the file containing the prompt text.<br>- models_prefixed_by_provider: List of models in "provider:model" format (e.g., "openai:gpt-4"); if None, defaults to ["openai:gpt-4o-mini"].<br>Returns: List of responses, one from each specified model. |
| prompt_from_file2file_tool | Read a prompt from a file, send it to multiple LLM models, and write each response to a file.<br>Args:<br>- file_path: Path to the file containing the prompt text.<br>- models_prefixed_by_provider: List of models in "provider:model" format (e.g., "openai:gpt-4"); if None, defaults to ["openai:gpt-4o-mini"].<br>- output_dir: Directory where response files are saved; defaults to a responses/ subdirectory of the input file's directory.<br>- output_extension: File extension for output files (e.g., 'py', 'txt', 'md'); if None, defaults to 'md'.<br>- output_path: Optional full output path including filename; if provided, its extension overrides output_extension.<br>Returns: List of file paths where responses were written. |
| list_providers_tool | List all supported LLM providers.<br>Returns: Dictionary of the main providers and their shortcut aliases. |
| list_models_tool | List all available models for a specific provider.<br>Args:<br>- provider: The provider to list models for (e.g., "openai", "anthropic").<br>Returns: List of model names available for the specified provider. |
| persona_dm_tool | Generate responses from multiple LLM models and use a decision maker model to choose the best direction. This tool first sends a prompt from a file to multiple models, then uses a designated decision maker model to evaluate all responses and provide a final decision.<br>Args:<br>- from_file: Path to the file containing the prompt text.<br>- models_prefixed_by_provider: List of team member models in "provider:model" format; if None, defaults to ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet", "gemini:gemini-2.5-pro"].<br>- output_dir: Directory where response files are saved; defaults to a responses/ subdirectory of the input file's directory.<br>- output_extension: File extension for output files (e.g., 'py', 'txt', 'md').<br>- output_path: Optional full output path including filename for the persona document.<br>- persona_dm_model: Model used to make the decision; defaults to DEFAULT_DECISION_MAKER_MODEL.<br>- persona_prompt: Custom persona prompt template; if None, uses the default.<br>Returns: Path to the persona output file. |
| persona_ba_tool | Generate business analysis using a specialized Business Analyst persona, with optional decision making. This tool uses a specialized Business Analyst prompt to analyze business requirements from a file. It can either use a single model or leverage the team decision-making functionality to get multiple perspectives and consolidate them.<br>Args:<br>- from_file: Path to the file containing the business requirements.<br>- models_prefixed_by_provider: List of models in "provider:model" format; if None, defaults to DEFAULT_MODEL.<br>- output_dir: Directory where response files are saved; defaults to a responses/ subdirectory of the input file's directory.<br>- output_extension: File extension for output files (e.g., 'py', 'txt', 'md').<br>- output_path: Optional full output path including filename for the output document.<br>- use_decision_maker: Whether to use the decision maker functionality.<br>- decision_maker_models: Models to use when use_decision_maker is True; if None, defaults to DEFAULT_TEAM_MODELS.<br>- ba_prompt: Custom business analyst prompt template.<br>- decision_maker_model: Model used for decision making; defaults to DEFAULT_DECISION_MAKER_MODEL.<br>- decision_maker_prompt: Custom persona prompt template for decision making.<br>Returns: Path to the business analysis output file. |
| persona_pm_tool | Generate product management plans using a specialized Product Manager persona, with optional decision making. This tool uses a specialized Product Manager prompt to create comprehensive product plans from a file. It can either use a single model or leverage the team decision-making functionality to get multiple perspectives and consolidate them.<br>Args:<br>- from_file: Path to the file containing the product requirements.<br>- models_prefixed_by_provider: List of models in "provider:model" format; if None, defaults to DEFAULT_MODEL.<br>- output_dir: Directory where response files are saved; defaults to a responses/ subdirectory of the input file's directory.<br>- output_extension: File extension for output files (e.g., 'py', 'txt', 'md').<br>- output_path: Optional full output path including filename for the output document.<br>- use_decision_maker: Whether to use the decision maker functionality.<br>- decision_maker_models: Models to use when use_decision_maker is True; if None, defaults to DEFAULT_TEAM_MODELS.<br>- pm_prompt: Custom product manager prompt template.<br>- decision_maker_model: Model used for decision making; defaults to DEFAULT_DECISION_MAKER_MODEL.<br>- decision_maker_prompt: Custom persona prompt template for decision making.<br>Returns: Path to the product plan output file. |
| persona_sw_tool | Generate specification documents using a specialized Spec Writer persona, with optional decision making. This tool uses a specialized Spec Writer prompt to create comprehensive specification documents from a file. It can either use a single model or leverage the team decision-making functionality to get multiple perspectives and consolidate them.<br>Args:<br>- from_file: Path to the file containing the requirements or PRD.<br>- models_prefixed_by_provider: List of models in "provider:model" format; if None, defaults to DEFAULT_MODEL.<br>- output_dir: Directory where response files are saved; defaults to a responses/ subdirectory of the input file's directory.<br>- output_extension: File extension for output files (e.g., 'py', 'txt', 'md').<br>- output_path: Optional full output path including filename for the output document.<br>- use_decision_maker: Whether to use the decision maker functionality.<br>- decision_maker_models: Models to use when use_decision_maker is True; if None, defaults to DEFAULT_TEAM_MODELS.<br>- sw_prompt: Custom spec writer prompt template.<br>- decision_maker_model: Model used for decision making; defaults to DEFAULT_DECISION_MAKER_MODEL.<br>- decision_maker_prompt: Custom persona prompt template for decision making.<br>Returns: Path to the specification output file. |
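These tools are invoked through an MCP client rather than imported directly. A minimal sketch using the official `mcp` Python SDK over stdio; the launch command below is a placeholder assumption, so substitute however you actually start this server:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical launch command -- point this at however the server is started.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Fan one prompt out to two models; omitting models_prefixed_by_provider
            # falls back to ["openai:gpt-4o-mini"] per the table above.
            result = await session.call_tool(
                "prompt_tool",
                {
                    "text": "List three risks of adding a cache layer.",
                    "models_prefixed_by_provider": [
                        "openai:gpt-4o-mini",
                        "anthropic:claude-3-7-sonnet",
                    ],
                },
            )
            for block in result.content:
                # Tool output arrives as MCP content blocks; text blocks carry .text.
                print(getattr(block, "text", block))


asyncio.run(main())
```

The file-based tools follow the same pattern, e.g. `session.call_tool("persona_dm_tool", {"from_file": "prompts/feature.md"})`, where the path is purely illustrative.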
Prompts
Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| list_mcp_assets | List MCP Assets prompt for comprehensive server capability overview. Provides dynamic listing of all available prompts, tools, and resources with usage examples and quick start guidance. |
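Prompts are retrieved with the client's prompt API rather than `call_tool`. A minimal sketch under the same assumptions as the tools example above:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])  # hypothetical launch command
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # get_prompt returns rendered messages; the client feeds them to its
            # own LLM rather than executing anything server-side.
            result = await session.get_prompt("list_mcp_assets")
            for message in result.messages:
                print(f"[{message.role}]", getattr(message.content, "text", message.content))


asyncio.run(main())
```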
Resources
Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| No resources | |