prompt_from_file_to_file

Send a prompt from a file to multiple LLM models and store their responses in a specified directory, using absolute paths for both the input file and the output directory.

Instructions

Send a prompt from a file to multiple LLM models and save responses to files. IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\path\to\file) for both file and output directory, not relative paths.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| abs_file_path | Yes | Absolute path to the file containing the prompt (must be an absolute path, not relative) | (none) |
| abs_output_dir | No | Absolute directory path to save the response files to (must be an absolute path, not relative) | "." (current directory) |
| models_prefixed_by_provider | No | List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o') | Default models (DEFAULT_MODELS environment variable) |
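
For example, a client might call the tool with arguments shaped like the following; the paths and model list are purely illustrative:

    # Illustrative tool-call arguments; all paths and model names are hypothetical.
    arguments = {
        "abs_file_path": "/home/user/prompts/summarize.txt",  # must be absolute
        "abs_output_dir": "/home/user/responses",             # must be absolute
        "models_prefixed_by_provider": ["openai:gpt-4o"],
    }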

Implementation Reference

  • Core handler function: reads the prompt from the input file, sends it to the models via the prompt_from_file helper, and saves each response to a markdown file in the output directory, named after the input file's stem and the model.
    def prompt_from_file_to_file(
        abs_file_path: str,
        models_prefixed_by_provider: List[str] = None,
        abs_output_dir: str = ".",
    ) -> List[str]:
        """
        Read text from a file, send it as a prompt to multiple models, and save responses to files.

        Args:
            abs_file_path: Absolute path to the text file (must be an absolute path, not relative)
            models_prefixed_by_provider: List of model strings in format "provider:model".
                If None, uses the DEFAULT_MODELS environment variable.
            abs_output_dir: Absolute directory path to save response files
                (must be an absolute path, not relative)

        Returns:
            List of paths to the output files
        """
        # Validate the output directory, creating it if necessary
        output_path = Path(abs_output_dir)
        if not output_path.exists():
            output_path.mkdir(parents=True, exist_ok=True)
        if not output_path.is_dir():
            raise ValueError(f"Not a directory: {abs_output_dir}")

        # Get the base name (stem) of the input file
        input_file_name = Path(abs_file_path).stem

        # Get one response per model
        responses = prompt_from_file(abs_file_path, models_prefixed_by_provider)

        # Determine the models that were actually used
        models_used = models_prefixed_by_provider
        if not models_used:
            default_models = os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL)
            models_used = [model.strip() for model in default_models.split(",")]

        # Save each response to its own markdown file
        output_files = []
        for model_string, response in zip(models_used, responses):
            # Sanitize the model string for use in a filename (colons -> underscores)
            safe_model_name = model_string.replace(":", "_")

            # Output filename: "<input stem>_<sanitized model string>.md"
            output_file = output_path / f"{input_file_name}_{safe_model_name}.md"

            # Write the response to the file as markdown
            try:
                with open(output_file, "w", encoding="utf-8") as f:
                    f.write(response)
                output_files.append(str(output_file))
            except Exception as e:
                logger.error(f"Error writing response to {output_file}: {e}")
                output_files.append(f"Error: {str(e)}")

        return output_files
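
    As a usage sketch (all paths here are hypothetical), calling the handler directly might look like this; with the naming scheme above, the openai:gpt-4o response lands in summarize_openai_gpt-4o.md:

    output_files = prompt_from_file_to_file(
        abs_file_path="/home/user/prompts/summarize.txt",   # hypothetical input file
        models_prefixed_by_provider=["openai:gpt-4o"],
        abs_output_dir="/home/user/responses",              # hypothetical output dir
    )
    print(output_files)
    # e.g. ["/home/user/responses/summarize_openai_gpt-4o.md"]
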
  • Pydantic input schema for validating tool arguments: input file path, optional models list, and output directory.
    class PromptFromFileToFileSchema(BaseModel):
        abs_file_path: str = Field(
            ...,
            description="Absolute path to the file containing the prompt (must be an absolute path, not relative)",
        )
        models_prefixed_by_provider: Optional[List[str]] = Field(
            None,
            description="List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models.",
        )
        abs_output_dir: str = Field(
            default=".",
            description="Absolute directory path to save the response files to (must be an absolute path, not relative. Default: current directory)",
        )
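
    A small sketch of validating raw tool arguments with this schema (values are hypothetical; the field default fills in when abs_output_dir is omitted):

    raw_arguments = {
        "abs_file_path": "/home/user/prompts/summarize.txt",  # hypothetical
        "models_prefixed_by_provider": ["openai:gpt-4o"],
    }
    validated = PromptFromFileToFileSchema(**raw_arguments)
    assert validated.abs_output_dir == "."  # default applied when the field is omitted
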
  • Tool registration in the MCP server's list_tools() method, specifying name, description, and input schema.
    Tool(
        name=JustPromptTools.PROMPT_FROM_FILE_TO_FILE,
        description="Send a prompt from a file to multiple LLM models and save responses to files. "
                    "IMPORTANT: You MUST provide absolute paths (e.g., /path/to/file or C:\\path\\to\\file) "
                    "for both file and output directory, not relative paths.",
        inputSchema=PromptFromFileToFileSchema.schema(),
    ),
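
    A sketch of where this registration typically sits, assuming the decorator-based handler pattern of the Python MCP SDK (the server name and the truncated description are illustrative):

    from mcp.server import Server
    from mcp.types import Tool

    app = Server("just-prompt")  # illustrative server instance

    @app.list_tools()
    async def list_tools() -> list[Tool]:
        return [
            Tool(
                name=JustPromptTools.PROMPT_FROM_FILE_TO_FILE,
                # full description elided here; see the registration above
                description="Send a prompt from a file to multiple LLM models and save responses to files. ...",
                inputSchema=PromptFromFileToFileSchema.schema(),
            ),
        ]
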
  • Server-side dispatch in call_tool(): extracts the arguments, calls the handler function, and formats the output file paths into the text response.
    elif name == JustPromptTools.PROMPT_FROM_FILE_TO_FILE:
        output_dir = arguments.get("abs_output_dir", ".")
        models_to_use = arguments.get("models_prefixed_by_provider")
        file_paths = prompt_from_file_to_file(
            arguments["abs_file_path"],
            models_to_use,
            output_dir,
        )
        return [TextContent(
            type="text",
            text="Responses saved to:\n" + "\n".join(file_paths),
        )]
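
    A sketch of the call_tool() handler this branch would sit in, reusing the illustrative app from the sketch above and again assuming the SDK's decorator pattern (error handling simplified):

    from mcp.types import TextContent

    @app.call_tool()
    async def call_tool(name: str, arguments: dict) -> list[TextContent]:
        if name == JustPromptTools.PROMPT_FROM_FILE_TO_FILE:
            file_paths = prompt_from_file_to_file(
                arguments["abs_file_path"],
                arguments.get("models_prefixed_by_provider"),
                arguments.get("abs_output_dir", "."),
            )
            return [TextContent(type="text", text="Responses saved to:\n" + "\n".join(file_paths))]
        raise ValueError(f"Unknown tool: {name}")
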
  • Tool name constant defined in JustPromptTools class.
    PROMPT_FROM_FILE_TO_FILE = "prompt_from_file_to_file"


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/disler/just-prompt'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.