
persona_dm_tool

Generate responses from multiple AI models and use a decision maker model to evaluate and select the best direction for agile team workflows.

Instructions

Generate responses from multiple LLM models and use a decision maker model to choose the best direction.

This tool first sends a prompt from a file to multiple models, then uses a designated
decision maker model to evaluate all responses and provide a final decision.

Args:
    from_file: Path to the file containing the prompt text
    models_prefixed_by_provider: List of team member models in format "provider:model" 
                                (if None, defaults to ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet", "gemini:gemini-2.5-pro"])
    output_dir: Directory where response files should be saved (defaults to input file's directory/responses)
    output_extension: File extension for output files (e.g., 'py', 'txt', 'md')
    output_path: Optional full output path with filename for the persona document
    persona_dm_model: Model to use for making the decision (defaults to DEFAULT_DECISION_MAKER_MODEL)
    persona_prompt: Custom persona prompt template (if None, uses the default)

Returns:
    Path to the persona output file
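The two-step flow the docstring describes can be sketched roughly as follows. This is an illustrative sketch only: `call_model`, the output file naming, and the prompt assembly are assumptions for demonstration, not the tool's real implementation.

```python
# Hypothetical sketch of persona_dm_tool's two-step workflow:
# fan a prompt out to "team member" models, save each response,
# then ask a decision-maker model to pick the best direction.
from pathlib import Path

def call_model(model: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the provider's API.
    return f"[{model}] response to: {prompt[:40]}"

def persona_dm(from_file, models=None,
               persona_dm_model="openai:gpt-4o-mini", output_dir=None):
    models = models or ["openai:gpt-4.1", "anthropic:claude-3-7-sonnet",
                        "gemini:gemini-2.5-pro"]
    prompt = Path(from_file).read_text()
    # Default output directory: <input file's directory>/responses
    out = Path(output_dir) if output_dir else Path(from_file).parent / "responses"
    out.mkdir(parents=True, exist_ok=True)

    # Step 1: collect and save one response per team-member model.
    responses = {m: call_model(m, prompt) for m in models}
    for m, text in responses.items():
        (out / f"{m.replace(':', '_')}.md").write_text(text)

    # Step 2: the decision maker evaluates all responses together.
    team = "\n\n".join(f"## {m}\n{r}" for m, r in responses.items())
    decision = call_model(persona_dm_model, f"{prompt}\n\n{team}")
    decision_path = out / "persona_decision.md"
    decision_path.write_text(decision)
    return str(decision_path)
```

Note that even this sketch makes the side effects visible: one file written per team member plus the final decision document, which is exactly the behavioral disclosure the review below flags as missing.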

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| from_file | Yes | | |
| models_prefixed_by_provider | No | | |
| output_dir | No | | |
| output_extension | No | | |
| output_path | No | | |
| persona_dm_model | No | | openai:gpt-4o-mini |
| persona_prompt | No | | see default template below |

Default persona_prompt template:

<purpose> You are the decision maker of the agile team. You are given a list of responses from your team members. Your job is to take in the original question prompt, and each of the team members' responses, and choose the best direction for the team. </purpose> <instructions> <instruction>Each team member has proposed an answer to the question posed in the prompt.</instruction> <instruction>Given the original question prompt, and each of the team members' responses, choose the best answer.</instruction> <instruction>Tally the votes of the team members, choose the best direction, and explain why you chose it.</instruction> <instruction>To preserve anonymity, we will use model names instead of real names of your team members. When responding, use the model names in your response.</instruction> <instruction>As a decision maker, you breakdown the decision into several categories including: risk, reward, timeline, and resources. In addition to these guiding categories, you also consider the team members' expertise and experience. As a bleeding edge decision maker, you also invent new dimensions of decision making to help you make the best decision for your company.</instruction> <instruction>Your final decision maker response should be in markdown format with a comprehensive explanation of your decision. Start the top of the file with a title that says "Team Decision", include a table of contents, briefly describe the question/problem at hand then dive into several sections. One of your first sections should be a quick summary of your decision, then breakdown each of the team members' decisions into sections with your commentary on each. Where we lead into your decision with the categories of your decision making process, and then we lead into your final decision.</instruction> </instructions> <original-question>{original_prompt}</original-question> <team-decisions> {team_responses} </team-decisions>
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the two-step process (generate responses, then decide) and mentions file I/O (saving responses to files), but lacks critical details: whether this is a read-only or mutating operation, potential side effects (e.g., file system writes), error handling, performance characteristics, or authentication requirements. For a tool with 7 parameters and file operations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by detailed parameter explanations. It's appropriately sized for a complex tool with 7 parameters. The 'Args' and 'Returns' sections are efficiently organized. Minor improvements could include bolding key terms, but overall it's front-loaded and wastes no sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, multi-step LLM workflow, file I/O) and lack of annotations/output schema, the description is partially complete. It excels at parameter semantics but lacks behavioral context (safety, side effects, performance). The return value is documented ('Path to the persona output file'), but without an output schema, details about the file format or content are missing. For a tool of this complexity, more behavioral disclosure would be needed for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides detailed semantic explanations for all 7 parameters in the 'Args' section, including purpose, format examples, and default values. Since schema description coverage is 0% (titles only, no descriptions), the description fully compensates by adding essential meaning beyond the bare schema. Each parameter's role in the workflow is clearly explained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
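The "provider:model" format that models_prefixed_by_provider relies on can be validated with a few lines. This helper is a hypothetical sketch of the implied parsing rule (split on the first colon), not a function the tool actually exposes.

```python
# Assumed parsing rule for the "provider:model" identifier format,
# e.g. "openai:gpt-4.1" -> ("openai", "gpt-4.1").
def split_model_id(model_id: str) -> tuple[str, str]:
    provider, _, model = model_id.partition(":")
    if not provider or not model:
        raise ValueError(f"expected 'provider:model', got {model_id!r}")
    return provider, model
```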

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate responses from multiple LLM models and use a decision maker model to choose the best direction.' It specifies the verb ('generate responses', 'use a decision maker model') and resource ('multiple LLM models'), but doesn't explicitly differentiate from sibling tools like persona_ba_tool or persona_pm_tool, which likely have different roles in the persona workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools (e.g., persona_ba_tool, persona_pm_tool) or explain the context where this decision-making approach is preferred over simpler prompt tools like prompt_tool. The usage is implied through the description but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/danielscholl/agile-team-mcp-server'
