ceo_and_board

Facilitate decision-making by sending a prompt to multiple board member models and having a CEO model analyze and finalize the outcome. Requires absolute file and output directory paths.

Instructions

Send a prompt to multiple 'board member' models and have a 'CEO' model make a decision based on their responses. IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\path\to\file) for both file and output directory, not relative paths.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| abs_file_path | Yes | Absolute path to the file containing the prompt (must be an absolute path, not relative) | |
| abs_output_dir | No | Absolute directory path to save the response files and CEO decision (must be an absolute path, not relative) | . |
| ceo_model | No | Model to use for the CEO decision in format 'provider:model' | openai:o3 |
| models_prefixed_by_provider | No | List of models with provider prefixes to act as board members. If not provided, uses default models. | |
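Given the schema above, an arguments payload for this tool might look like the sketch below. All values are illustrative assumptions, not defaults taken from the server:

```python
# Illustrative arguments for the ceo_and_board tool; every path and model name
# below is a made-up example, not a value from the server itself.
arguments = {
    "abs_file_path": "/home/user/prompts/decision.md",  # required, absolute path
    "abs_output_dir": "/home/user/outputs",             # optional, defaults to "."
    "models_prefixed_by_provider": [                    # optional board members
        "openai:gpt-4o",
        "anthropic:claude-3-5-haiku",
    ],
    "ceo_model": "openai:o3",                           # optional, default shown
}
```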

Implementation Reference

  • Core handler function that implements the 'ceo_and_board' tool logic: reads prompt from file, gets responses from board models, constructs CEO prompt, generates CEO decision using prompt(), and saves files.
    def ceo_and_board_prompt(
        abs_from_file: str,
        abs_output_dir: str = ".",
        models_prefixed_by_provider: Optional[List[str]] = None,
        ceo_model: str = DEFAULT_CEO_MODEL,
        ceo_decision_prompt: str = DEFAULT_CEO_DECISION_PROMPT
    ) -> str:
        """
        Read text from a file, send it as prompt to multiple 'board member' models,
        and then have a 'CEO' model make a decision based on the responses.
    
        Args:
            abs_from_file: Absolute path to the text file containing the original prompt (must be an absolute path, not relative)
            abs_output_dir: Absolute directory path to save response files (must be an absolute path, not relative)
            models_prefixed_by_provider: List of model strings in format "provider:model"
                                       to act as the board members
            ceo_model: Model to use for the CEO decision in format "provider:model"
            ceo_decision_prompt: Template for the CEO decision prompt
    
        Returns:
            Path to the CEO decision file
        """
        # Validate output directory
        output_path = Path(abs_output_dir)
        if not output_path.exists():
            output_path.mkdir(parents=True, exist_ok=True)
    
        if not output_path.is_dir():
            raise ValueError(f"Not a directory: {abs_output_dir}")
    
        # Get the original prompt from the file
        try:
            with open(abs_from_file, 'r', encoding='utf-8') as f:
                original_prompt = f.read()
        except Exception as e:
            logger.error(f"Error reading file {abs_from_file}: {e}")
            raise ValueError(f"Error reading file: {str(e)}")
    
        # Step 1: Get board members' responses
        board_response_files = prompt_from_file_to_file(
            abs_file_path=abs_from_file,
            models_prefixed_by_provider=models_prefixed_by_provider,
            abs_output_dir=abs_output_dir
        )
    
        # Get the models that were actually used
        models_used = models_prefixed_by_provider
        if not models_used:
            default_models = os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL)
            models_used = [model.strip() for model in default_models.split(",")]
    
        # Step 2: Read in the board responses
        board_responses_text = ""
        for i, file_path in enumerate(board_response_files):
            try:
                with open(file_path, 'r', encoding='utf-8') as f:
                    response_content = f.read()
                    board_responses_text += f"""
    <board-response>
        <model-name>{models_used[i]}</model-name>
        <response>{response_content}</response>
    </board-response>
    """
            except Exception as e:
                logger.error(f"Error reading board response file {file_path}: {e}")
                board_responses_text += f"""
    <board-response>
        <model-name>{models_used[i]}</model-name>
        <response>Error reading response: {str(e)}</response>
    </board-response>
    """
    
        # Step 3: Prepare the CEO decision prompt
        final_ceo_prompt = ceo_decision_prompt.format(
            original_prompt=original_prompt,
            board_responses=board_responses_text
        )
    
        # Step 4: Save the CEO prompt to a file
        ceo_prompt_file = output_path / "ceo_prompt.xml"
        try:
            with open(ceo_prompt_file, "w", encoding="utf-8") as f:
                f.write(final_ceo_prompt)
        except Exception as e:
            logger.error(f"Error writing CEO prompt to {ceo_prompt_file}: {e}")
            raise ValueError(f"Error writing CEO prompt: {str(e)}")
        
        # Step 5: Get the CEO decision
        ceo_response = prompt(final_ceo_prompt, [ceo_model])[0]
    
        # Step 6: Write the CEO decision to a file
        ceo_output_file = output_path / "ceo_decision.md"
        try:
            with open(ceo_output_file, "w", encoding="utf-8") as f:
                f.write(ceo_response)
        except Exception as e:
            logger.error(f"Error writing CEO decision to {ceo_output_file}: {e}")
            raise ValueError(f"Error writing CEO decision: {str(e)}")
    
        return str(ceo_output_file)
  • Pydantic input schema for the 'ceo_and_board' tool, defining parameters like file path, board models, output dir, and CEO model.
    class CEOAndBoardSchema(BaseModel):
        abs_file_path: str = Field(..., description="Absolute path to the file containing the prompt (must be an absolute path, not relative)")
        models_prefixed_by_provider: Optional[List[str]] = Field(
            None, 
            description="List of models with provider prefixes to act as board members. If not provided, uses default models."
        )
        abs_output_dir: str = Field(
            default=".", 
            description="Absolute directory path to save the response files and CEO decision (must be an absolute path, not relative)"
        )
        ceo_model: str = Field(
            default=DEFAULT_CEO_MODEL,
            description="Model to use for the CEO decision in format 'provider:model'"
        )
  • MCP Tool registration in list_tools(), specifying name, description, and input schema.
    Tool(
        name=JustPromptTools.CEO_AND_BOARD,
        description="Send a prompt to multiple 'board member' models and have a 'CEO' model make a decision based on their responses. IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\\path\\to\\file) for both file and output directory, not relative paths.",
        inputSchema=CEOAndBoardSchema.schema(),
    ),
  • Dispatch handler in call_tool() that extracts arguments and invokes the core ceo_and_board_prompt function, returning file paths.
    elif name == JustPromptTools.CEO_AND_BOARD:
        file_path = arguments["abs_file_path"]
        output_dir = arguments.get("abs_output_dir", ".")
        models_to_use = arguments.get("models_prefixed_by_provider")
        ceo_model = arguments.get("ceo_model", DEFAULT_CEO_MODEL)
        
        ceo_decision_file = ceo_and_board_prompt(
            abs_from_file=file_path,
            abs_output_dir=output_dir,
            models_prefixed_by_provider=models_to_use,
            ceo_model=ceo_model
        )
        
        # Get the CEO prompt file path
        ceo_prompt_file = str(Path(ceo_decision_file).parent / "ceo_prompt.xml")
        
        return [TextContent(
            type="text",
            text=f"Board responses and CEO decision saved.\nCEO prompt file: {ceo_prompt_file}\nCEO decision file: {ceo_decision_file}"
        )]
  • Default CEO model constant used in the handler and schema.
    DEFAULT_CEO_MODEL = "openai:o3"
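In step 3 the handler fills the CEO decision template via `str.format()` with exactly two keys, `original_prompt` and `board_responses`. The actual `DEFAULT_CEO_DECISION_PROMPT` constant is not reproduced on this page; the sketch below is an assumed minimal template that would satisfy that `format()` call:

```python
# Assumed minimal stand-in for DEFAULT_CEO_DECISION_PROMPT (the real template
# is not shown on this page). It only needs the two placeholders the handler
# fills: {original_prompt} and {board_responses}.
EXAMPLE_CEO_DECISION_PROMPT = """<purpose>
You are the CEO. Read the original prompt and each board member's response,
then issue one final, reasoned decision.
</purpose>

<original-prompt>
{original_prompt}
</original-prompt>

<board-responses>
{board_responses}
</board-responses>
"""

rendered = EXAMPLE_CEO_DECISION_PROMPT.format(
    original_prompt="Should we adopt library X or library Y?",
    board_responses="<board-response>...</board-response>",
)
```

The rendered string is what gets written to `ceo_prompt.xml` and sent to the CEO model in steps 4 and 5.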
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the need for absolute paths and mentions the CEO decision process, but lacks details on behavioral traits such as error handling, rate limits, authentication needs, or what the output looks like (e.g., file formats, decision format). For a tool with 4 parameters and no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: the first explains the core functionality, and the second provides a critical usage note. It's front-loaded with the main purpose, and the 'IMPORTANT' section adds necessary guidance without redundancy. However, the second sentence could be integrated more smoothly, slightly affecting structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-model decision-making with file I/O), no annotations, and no output schema, the description is incomplete. It doesn't explain the output format, how the CEO decision is derived, error cases, or dependencies on other tools. For a tool with this functionality, more context is needed to ensure proper usage by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by emphasizing absolute paths in a note, but doesn't provide additional semantic context like examples or rationale for parameter choices. With high schema coverage, the baseline is 3, and the description meets this without compensating further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Send a prompt to multiple 'board member' models and have a 'CEO' model make a decision based on their responses.' It specifies the verb ('send'), resource ('prompt'), and outcome ('CEO model make a decision'), distinguishing it from simpler prompt tools like 'prompt' or 'prompt_from_file'. However, it doesn't explicitly differentiate from 'prompt_from_file_to_file' which also involves file-based prompting with output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing multi-model consensus with a CEO decision-maker, as opposed to single-model prompts. It includes an 'IMPORTANT' note about absolute paths, which provides some context. However, it doesn't explicitly state when to use this tool versus alternatives like 'prompt_from_file_to_file' or under what scenarios the board/CEO metaphor is beneficial, leaving usage somewhat implied rather than clearly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
