prompt_from_file2file_tool

Process prompts from files through multiple LLM models and save responses to files for comparison and analysis.

Instructions

Read a prompt from a file, send it to multiple LLM models, and write each model's response to its own file (see the client sketch below).

Args:
    file_path: Path to the file containing the prompt text.
    models_prefixed_by_provider: List of models in the format "provider:model" (e.g., "openai:gpt-4"). If None, defaults to ["openai:gpt-4o-mini"].
    output_dir: Directory where response files are saved. Defaults to a responses subdirectory of the input file's directory.
    output_extension: File extension for output files (e.g., 'py', 'txt', 'md'). If None, defaults to 'md'.
    output_path: Optional full output path including the filename. If provided, its extension is used and overrides output_extension.

Returns:
    List of file paths where the responses were written.
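A minimal sketch of invoking this tool from the official MCP Python SDK. The server launch command and the input file path are assumptions; adjust them to how you run agile-team-mcp-server locally.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumption: the server can be launched with uvx; substitute your own
# command and args if you run it differently.
server = StdioServerParameters(command="uvx", args=["agile-team-mcp-server"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "prompt_from_file2file_tool",
                arguments={
                    "file_path": "prompts/feature-idea.md",  # hypothetical input file
                    "models_prefixed_by_provider": ["openai:gpt-4o-mini"],
                    "output_extension": "md",
                },
            )
            # The tool returns the list of files the responses were written to.
            print(result.content)

asyncio.run(main())
```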

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| file_path | Yes | Path to the file containing the prompt text | |
| models_prefixed_by_provider | No | Models in "provider:model" format | ["openai:gpt-4o-mini"] |
| output_dir | No | Directory where response files are saved | input file's directory/responses |
| output_extension | No | File extension for output files | md |
| output_path | No | Full output path; its extension overrides output_extension | |
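The JSON Schema view of these parameters did not survive extraction; below is a minimal sketch of the equivalent schema, inferred from the table and the docstring above. The types are assumptions (strings throughout, with the model list as an array of strings).

```python
# Sketch of the tool's input schema, inferred from the parameter table above.
# Only file_path is required; all other parameters are optional.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "file_path": {"type": "string"},
        "models_prefixed_by_provider": {"type": "array", "items": {"type": "string"}},
        "output_dir": {"type": "string"},
        "output_extension": {"type": "string"},
        "output_path": {"type": "string"},
    },
    "required": ["file_path"],
}
```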

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/danielscholl/agile-team-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.