
prompt_from_file_to_file

Send a prompt from a file to multiple LLM models and store each model's response in a specified output directory, using absolute paths for both the input file and the output location.

Instructions

Send a prompt from a file to multiple LLM models and save responses to files. IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\path\to\file) for both file and output directory, not relative paths.

Input Schema

  • abs_file_path (required): Absolute path to the file containing the prompt (must be an absolute path, not relative).
  • abs_output_dir (optional; default: current directory): Absolute directory path to save the response files to (must be an absolute path, not relative).
  • models_prefixed_by_provider (optional): List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses the default models.
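
For reference, an agent's tool call would pass arguments shaped like the following (the paths and model names here are hypothetical):

    arguments = {
        "abs_file_path": "/home/user/prompts/summarize.txt",
        "abs_output_dir": "/home/user/responses",
        "models_prefixed_by_provider": ["openai:gpt-4o", "anthropic:claude-3-haiku"],
    }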

Implementation Reference

  • Core handler function: reads the prompt from the input file, queries the models via the prompt_from_file helper, and saves each response to a Markdown file in the output directory, named after the input file's stem and the model.
    def prompt_from_file_to_file(
        abs_file_path: str,
        models_prefixed_by_provider: Optional[List[str]] = None,
        abs_output_dir: str = ".",
    ) -> List[str]:
        """
        Read text from a file, send it as prompt to multiple models, and save responses to files.
    
        Args:
            abs_file_path: Absolute path to the text file (must be an absolute path, not relative)
            models_prefixed_by_provider: List of model strings in format "provider:model"
                                        If None, uses the DEFAULT_MODELS environment variable
            abs_output_dir: Absolute directory path to save response files (must be an absolute path, not relative)
    
        Returns:
            List of paths to the output files
        """
        # Validate output directory
        output_path = Path(abs_output_dir)
        if not output_path.exists():
            output_path.mkdir(parents=True, exist_ok=True)
    
        if not output_path.is_dir():
            raise ValueError(f"Not a directory: {abs_output_dir}")
    
        # Get the base name of the input file
        input_file_name = Path(abs_file_path).stem
    
        # Get responses
        responses = prompt_from_file(abs_file_path, models_prefixed_by_provider)
    
        # Save responses to files
        output_files = []
    
        # Get the models that were actually used
        models_used = models_prefixed_by_provider
        if not models_used:
            default_models = os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL)
            models_used = [model.strip() for model in default_models.split(",")]
    
        for model_string, response in zip(models_used, responses):
            # Sanitize model string for filename (replace colons with underscores)
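            # e.g. "openai:gpt-4o" -> "openai_gpt-4o"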
            safe_model_name = model_string.replace(":", "_")
    
            # Create output filename with .md extension
            output_file = output_path / f"{input_file_name}_{safe_model_name}.md"
    
            # Write response to file as markdown
            try:
                with open(output_file, "w", encoding="utf-8") as f:
                    f.write(response)
                output_files.append(str(output_file))
            except Exception as e:
                logger.error(f"Error writing response to {output_file}: {e}")
                output_files.append(f"Error: {str(e)}")
    
        return output_files
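  • Usage sketch (added for illustration, not from the source; paths and model names are hypothetical, and it assumes the module-level imports and helpers referenced above plus configured provider API keys):
    output_files = prompt_from_file_to_file(
        abs_file_path="/home/user/prompts/summarize.txt",
        models_prefixed_by_provider=["openai:gpt-4o", "anthropic:claude-3-haiku"],
        abs_output_dir="/home/user/responses",
    )
    # Given the naming scheme above, the returned paths would be:
    #   /home/user/responses/summarize_openai_gpt-4o.md
    #   /home/user/responses/summarize_anthropic_claude-3-haiku.md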
  • Pydantic input schema for validating tool arguments: input file path, optional models list, and output directory.
    class PromptFromFileToFileSchema(BaseModel):
        abs_file_path: str = Field(..., description="Absolute path to the file containing the prompt (must be an absolute path, not relative)")
        models_prefixed_by_provider: Optional[List[str]] = Field(
            None, 
            description="List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models."
        )
        abs_output_dir: str = Field(
            default=".", 
            description="Absolute directory path to save the response files to (must be an absolute path, not relative. Default: current directory)"
        )
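  • Validation sketch (added for illustration, not from the source; assumes Pydantic v1, consistent with the .schema() call used in the registration below, with hypothetical values):
    args = PromptFromFileToFileSchema(
        abs_file_path="/home/user/prompts/summarize.txt",
        models_prefixed_by_provider=["openai:gpt-4o"],
    )
    assert args.abs_output_dir == "."  # the default applies when omitted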
  • Tool registration in the MCP server's list_tools() method, specifying name, description, and input schema.
    Tool(
        name=JustPromptTools.PROMPT_FROM_FILE_TO_FILE,
        description="Send a prompt from a file to multiple LLM models and save responses to files. IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\\path\\to\\file) for both file and output directory, not relative paths.",
        inputSchema=PromptFromFileToFileSchema.schema(),
    ),
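  • Registration context sketch (added for illustration, not from the source; the handler shape follows the MCP Python SDK, and the server name is hypothetical):
    from mcp.server import Server
    from mcp.types import Tool

    server = Server("just-prompt")

    @server.list_tools()
    async def list_tools() -> list[Tool]:
        return [
            Tool(
                name=JustPromptTools.PROMPT_FROM_FILE_TO_FILE,
                description="...",  # the full description string shown above
                inputSchema=PromptFromFileToFileSchema.schema(),
            ),
        ]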
  • Server-side dispatch in call_tool(): extracts the arguments, calls the handler function, and formats the output file paths into the response.
    elif name == JustPromptTools.PROMPT_FROM_FILE_TO_FILE:
        output_dir = arguments.get("abs_output_dir", ".")
        models_to_use = arguments.get("models_prefixed_by_provider")
        file_paths = prompt_from_file_to_file(
            arguments["abs_file_path"], 
            models_to_use,
            output_dir
        )
        return [TextContent(
            type="text",
            text="Responses saved to:\n" + "\n".join(file_paths)
        )]
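  • Handler context sketch (added for illustration, not from the source; the call_tool signature follows the MCP Python SDK, and the server object is assumed from the registration sketch above):
    from mcp.types import TextContent

    @server.call_tool()
    async def call_tool(name: str, arguments: dict) -> list[TextContent]:
        if name == JustPromptTools.PROMPT_FROM_FILE_TO_FILE:
            file_paths = prompt_from_file_to_file(
                arguments["abs_file_path"],
                arguments.get("models_prefixed_by_provider"),
                arguments.get("abs_output_dir", "."),
            )
            return [TextContent(
                type="text",
                text="Responses saved to:\n" + "\n".join(file_paths),
            )]
        raise ValueError(f"Unknown tool: {name}")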
  • Tool name constant defined in JustPromptTools class.
    PROMPT_FROM_FILE_TO_FILE = "prompt_from_file_to_file"
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the absolute path requirement (a constraint) and that it processes multiple LLM models, but it doesn't describe what happens during execution (e.g., sequential vs. parallel processing, error handling, file naming conventions, or what 'default models' means). For a tool with file I/O and model execution, this leaves significant behavioral gaps unaddressed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized (two sentences) and front-loaded with the core purpose. The 'IMPORTANT' note is relevant but could be integrated more smoothly. There's no wasted text, and every sentence adds value (purpose and critical constraint), though the structure is slightly abrupt with the all-caps emphasis.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (file I/O, model execution, batch processing) with no annotations and no output schema, the description is incomplete. It doesn't explain what the output files contain (e.g., raw responses, metadata), how errors are handled, what 'default models' are, or the execution behavior. For a tool with multiple parameters and significant side effects, this leaves too much unspecified for reliable agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it reinforces the absolute path requirement (already in the schema descriptions) and mentions 'multiple LLM models' (implied by the array parameter). No additional syntax, format, or semantic details are provided beyond the schema descriptions, which is acceptable as a baseline when schema coverage is this high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Send a prompt from a file to multiple LLM models and save responses to files') with specific resources (file input, file output, LLM models). It is implicitly distinguishable from the sibling tools 'prompt' (which likely takes direct input) and 'prompt_from_file' (which likely doesn't save to files), but it doesn't explicitly name these alternatives. The purpose is specific but could be more precise about sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the 'IMPORTANT' note about absolute paths, suggesting this tool is for file-based batch processing. However, it doesn't explicitly state when to use this vs. 'prompt_from_file' (which likely processes from file but doesn't save to files) or 'prompt' (direct input). No explicit alternatives or exclusions are provided, leaving usage context somewhat implied rather than clearly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
