# prompt_from_file_tool
Reads a prompt from a file and sends it to multiple LLM models for side-by-side comparison, delivering one prompt to several providers in a single call.
## Instructions
Read a prompt from a file and send it to multiple LLM models.
**Args:**

- `file_path`: Path to the file containing the prompt text.
- `models_prefixed_by_provider`: List of models in `"provider:model"` format (e.g., `"openai:gpt-4"`). If `None`, defaults to `["openai:gpt-4o-mini"]`.

**Returns:**

- A list of responses, one from each specified model.
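The behavior described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; `query_model` is a hypothetical placeholder for a real provider SDK call:

```python
from pathlib import Path


def query_model(provider: str, model: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the provider's
    # SDK (e.g., the OpenAI or Anthropic client) here.
    return f"[{provider}/{model}] response to: {prompt[:40]}"


def prompt_from_file(file_path, models_prefixed_by_provider=None):
    """Read a prompt from a file and fan it out to one or more models."""
    if models_prefixed_by_provider is None:
        # Documented default when no models are given.
        models_prefixed_by_provider = ["openai:gpt-4o-mini"]
    prompt = Path(file_path).read_text()
    responses = []
    for spec in models_prefixed_by_provider:
        # Split only on the first colon so model names may contain colons.
        provider, model = spec.split(":", 1)
        responses.append(query_model(provider, model, prompt))
    return responses
```

The return value preserves the order of `models_prefixed_by_provider`, so the caller can pair each response with the model that produced it.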
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| `file_path` | Yes | Path to the file containing the prompt text | |
| `models_prefixed_by_provider` | No | List of models in `"provider:model"` format | `["openai:gpt-4o-mini"]` |
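A call to the tool might look like the following JSON arguments object. The file path and model names are illustrative, not values from the tool's documentation:

```json
{
  "file_path": "prompts/summarize.txt",
  "models_prefixed_by_provider": ["openai:gpt-4o-mini", "anthropic:claude-3-5-sonnet"]
}
```

Omitting `models_prefixed_by_provider` sends the prompt to the default model only.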