prompt_from_file2file_tool
Process prompts from files through multiple LLM models and save responses to files for comparison and analysis.
Instructions
Read a prompt from a file, send it to multiple LLM models, and write responses to files.
Args:
- `file_path`: Path to the file containing the prompt text.
- `models_prefixed_by_provider`: List of models in "provider:model" format (e.g., "openai:gpt-4"). Defaults to ["openai:gpt-4o-mini"].
- `output_dir`: Directory where response files are saved. Defaults to a `responses` subdirectory of the input file's directory.
- `output_extension`: File extension for output files (e.g., 'py', 'txt', 'md'). Defaults to 'md'.
- `output_path`: Optional full output path including the filename. When provided, its extension overrides `output_extension`.
Returns:
List of file paths where responses were written
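For illustration, a minimal sketch of invoking the tool from Python. The module path below is a hypothetical assumption (the page does not document one); the parameter names, defaults, and return value come from the Args and Returns sections above.

```python
# A minimal sketch, assuming the tool is exposed as an importable Python
# function; the module path is hypothetical, not documented.
from prompt_tools import prompt_from_file2file_tool  # hypothetical module

# Send the prompt in question.txt to two models and write one markdown
# response file per model into ./responses.
response_paths = prompt_from_file2file_tool(
    file_path="question.txt",
    models_prefixed_by_provider=["openai:gpt-4", "openai:gpt-4o-mini"],
    output_dir="responses",
    output_extension="md",
)

# The return value is a list of file paths, one per model, ready for
# side-by-side comparison of the responses.
for path in response_paths:
    print(path)
```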
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| file_path | Yes | Path to the file containing the prompt text | |
| models_prefixed_by_provider | No | List of models in "provider:model" format | ["openai:gpt-4o-mini"] |
| output_dir | No | Directory where response files are saved | `<input file's directory>/responses` |
| output_extension | No | File extension for output files | md |
| output_path | No | Full output path including filename; its extension overrides output_extension | None |
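To illustrate the defaults and the `output_path` override, a short sketch reusing the hypothetical function from the example above; only the behavior described in the Args section is assumed.

```python
# Minimal call: only the required field. The other parameters fall back
# to their documented defaults: ["openai:gpt-4o-mini"], a "responses"
# subdirectory next to the input file, and the 'md' extension.
paths = prompt_from_file2file_tool(file_path="prompts/question.txt")

# Explicit output_path: its extension (.py) overrides output_extension.
paths = prompt_from_file2file_tool(
    file_path="prompts/write_script.txt",
    output_path="out/script.py",
)
```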