ceo_and_board

Facilitate decision-making by sending a prompt to multiple board member models and having a CEO model analyze and finalize the outcome. Requires absolute file and output directory paths.

Instructions

Send a prompt to multiple 'board member' models and have a 'CEO' model make a decision based on their responses. IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\path\to\file) for both file and output directory, not relative paths.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| abs_file_path | Yes | Absolute path to the file containing the prompt (must be an absolute path, not relative) | |
| abs_output_dir | No | Absolute directory path to save the response files and CEO decision (must be an absolute path, not relative) | `.` |
| ceo_model | No | Model to use for the CEO decision in format 'provider:model' | `openai:o3` |
| models_prefixed_by_provider | No | List of models with provider prefixes to act as board members. If not provided, uses default models. | |
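A call to this tool passes an argument payload matching the schema above. The sketch below is illustrative only: the paths are placeholders and the board model names are assumed examples, not defaults shipped by just-prompt.

```python
# Example argument payload for the ceo_and_board tool.
# Paths and board model names below are illustrative placeholders.
args = {
    "abs_file_path": "/home/user/prompts/feature_decision.txt",  # required, absolute
    "abs_output_dir": "/home/user/prompts/out",                  # optional, absolute
    "ceo_model": "openai:o3",                                    # optional, default shown
    "models_prefixed_by_provider": [                             # optional board members
        "openai:gpt-4o",
        "anthropic:claude-3-5-sonnet",
    ],
}

# Only abs_file_path is required; the other keys fall back to defaults.
required = {"abs_file_path"}
assert required <= set(args)
```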

Implementation Reference

  • Core handler function that implements the 'ceo_and_board' tool logic: reads prompt from file, gets responses from board models, constructs CEO prompt, generates CEO decision using prompt(), and saves files.
    # Excerpt: assumes module-level imports (os, logging, pathlib.Path,
    # typing.List/Optional) and the prompt()/prompt_from_file_to_file helpers.
    def ceo_and_board_prompt(
        abs_from_file: str,
        abs_output_dir: str = ".",
        models_prefixed_by_provider: Optional[List[str]] = None,
        ceo_model: str = DEFAULT_CEO_MODEL,
        ceo_decision_prompt: str = DEFAULT_CEO_DECISION_PROMPT
    ) -> str:
        """
        Read text from a file, send it as prompt to multiple 'board member' models,
        and then have a 'CEO' model make a decision based on the responses.
    
        Args:
            abs_from_file: Absolute path to the text file containing the original prompt (must be an absolute path, not relative)
            abs_output_dir: Absolute directory path to save response files (must be an absolute path, not relative)
            models_prefixed_by_provider: List of model strings in format "provider:model"
                                       to act as the board members
            ceo_model: Model to use for the CEO decision in format "provider:model"
            ceo_decision_prompt: Template for the CEO decision prompt
    
        Returns:
            Path to the CEO decision file
        """
        # Validate output directory
        output_path = Path(abs_output_dir)
        if not output_path.exists():
            output_path.mkdir(parents=True, exist_ok=True)
    
        if not output_path.is_dir():
            raise ValueError(f"Not a directory: {abs_output_dir}")
    
        # Get the original prompt from the file
        try:
            with open(abs_from_file, 'r', encoding='utf-8') as f:
                original_prompt = f.read()
        except Exception as e:
            logger.error(f"Error reading file {abs_from_file}: {e}")
            raise ValueError(f"Error reading file: {str(e)}")
    
        # Step 1: Get board members' responses
        board_response_files = prompt_from_file_to_file(
            abs_file_path=abs_from_file,
            models_prefixed_by_provider=models_prefixed_by_provider,
            abs_output_dir=abs_output_dir
        )
    
        # Get the models that were actually used
        models_used = models_prefixed_by_provider
        if not models_used:
            default_models = os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL)
            models_used = [model.strip() for model in default_models.split(",")]
    
        # Step 2: Read in the board responses
        board_responses_text = ""
        for i, file_path in enumerate(board_response_files):
            try:
                with open(file_path, 'r', encoding='utf-8') as f:
                    response_content = f.read()
                    board_responses_text += f"""
    <board-response>
        <model-name>{models_used[i]}</model-name>
        <response>{response_content}</response>
    </board-response>
    """
            except Exception as e:
                logger.error(f"Error reading board response file {file_path}: {e}")
                board_responses_text += f"""
    <board-response>
        <model-name>{models_used[i]}</model-name>
        <response>Error reading response: {str(e)}</response>
    </board-response>
    """
    
        # Step 3: Prepare the CEO decision prompt
        final_ceo_prompt = ceo_decision_prompt.format(
            original_prompt=original_prompt,
            board_responses=board_responses_text
        )
    
        # Step 4: Save the CEO prompt to a file
        ceo_prompt_file = output_path / "ceo_prompt.xml"
        try:
            with open(ceo_prompt_file, "w", encoding="utf-8") as f:
                f.write(final_ceo_prompt)
        except Exception as e:
            logger.error(f"Error writing CEO prompt to {ceo_prompt_file}: {e}")
            raise ValueError(f"Error writing CEO prompt: {str(e)}")
        
        # Step 5: Get the CEO decision
        ceo_response = prompt(final_ceo_prompt, [ceo_model])[0]
    
        # Step 6: Write the CEO decision to a file
        ceo_output_file = output_path / "ceo_decision.md"
        try:
            with open(ceo_output_file, "w", encoding="utf-8") as f:
                f.write(ceo_response)
        except Exception as e:
            logger.error(f"Error writing CEO decision to {ceo_output_file}: {e}")
            raise ValueError(f"Error writing CEO decision: {str(e)}")
    
        return str(ceo_output_file)
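Step 3 only requires that the template expose `{original_prompt}` and `{board_responses}` named placeholders; the real `DEFAULT_CEO_DECISION_PROMPT` is not shown in this excerpt, so the template below is a hypothetical stand-in illustrating the substitution.

```python
# Hypothetical stand-in for DEFAULT_CEO_DECISION_PROMPT; the real template
# differs, but must expose the same two named placeholders.
template = (
    "<original-prompt>{original_prompt}</original-prompt>\n"
    "<board-responses>{board_responses}</board-responses>\n"
    "As CEO, weigh the responses above and issue a final decision."
)

final_ceo_prompt = template.format(
    original_prompt="Should we build feature X?",
    board_responses="<board-response>...</board-response>",
)

assert "Should we build feature X?" in final_ceo_prompt
```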
  • Pydantic input schema for the 'ceo_and_board' tool, defining parameters like file path, board models, output dir, and CEO model.
    class CEOAndBoardSchema(BaseModel):
        abs_file_path: str = Field(..., description="Absolute path to the file containing the prompt (must be an absolute path, not relative)")
        models_prefixed_by_provider: Optional[List[str]] = Field(
            None, 
            description="List of models with provider prefixes to act as board members. If not provided, uses default models."
        )
        abs_output_dir: str = Field(
            default=".", 
            description="Absolute directory path to save the response files and CEO decision (must be an absolute path, not relative)"
        )
        ceo_model: str = Field(
            default=DEFAULT_CEO_MODEL,
            description="Model to use for the CEO decision in format 'provider:model'"
        )
  • MCP Tool registration in list_tools(), specifying name, description, and input schema.
    Tool(
        name=JustPromptTools.CEO_AND_BOARD,
        description="Send a prompt to multiple 'board member' models and have a 'CEO' model make a decision based on their responses. IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\\path\\to\\file) for both file and output directory, not relative paths.",
        inputSchema=CEOAndBoardSchema.schema(),
    ),
  • Dispatch handler in call_tool() that extracts arguments and invokes the core ceo_and_board_prompt function, returning file paths.
    elif name == JustPromptTools.CEO_AND_BOARD:
        file_path = arguments["abs_file_path"]
        output_dir = arguments.get("abs_output_dir", ".")
        models_to_use = arguments.get("models_prefixed_by_provider")
        ceo_model = arguments.get("ceo_model", DEFAULT_CEO_MODEL)
        
        ceo_decision_file = ceo_and_board_prompt(
            abs_from_file=file_path,
            abs_output_dir=output_dir,
            models_prefixed_by_provider=models_to_use,
            ceo_model=ceo_model
        )
        
        # Get the CEO prompt file path
        ceo_prompt_file = str(Path(ceo_decision_file).parent / "ceo_prompt.xml")
        
        return [TextContent(
            type="text",
            text=f"Board responses and CEO decision saved.\nCEO prompt file: {ceo_prompt_file}\nCEO decision file: {ceo_decision_file}"
        )]
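The prompt-file path reported back to the client is not returned by the handler; the dispatcher reconstructs it as a sibling of the decision file written in Step 4. In isolation (using `PurePosixPath` so the sketch is platform-independent, where the server itself uses `Path`; the path is illustrative):

```python
from pathlib import PurePosixPath

# The handler returns the decision file; the dispatcher derives the
# ceo_prompt.xml path from the same output directory.
ceo_decision_file = "/tmp/out/ceo_decision.md"
ceo_prompt_file = str(PurePosixPath(ceo_decision_file).parent / "ceo_prompt.xml")

assert ceo_prompt_file == "/tmp/out/ceo_prompt.xml"
```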
  • Default CEO model constant used in the handler and schema.
    DEFAULT_CEO_MODEL = "openai:o3"