prompt_from_file
Generate responses from multiple LLM models by sending a prompt stored in a file. Specify the absolute file path and, optionally, the models to query, to streamline model testing and integration.
Instructions
Send a prompt from a file to multiple LLM models. IMPORTANT: You MUST provide an absolute file path (e.g., /path/to/file or C:\path\to\file), not a relative path.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| abs_file_path | Yes | Absolute path to the file containing the prompt (must be an absolute path, not relative) | |
| models_prefixed_by_provider | No | List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models. | |
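
A concrete argument payload matching this schema might look like the sketch below; the file path and model name are illustrative placeholders, and models_prefixed_by_provider can be omitted entirely to fall back to the default models.

```python
# Illustrative arguments for a prompt_from_file tool call.
# The path is a placeholder and must be absolute; the model list is optional.
arguments = {
    "abs_file_path": "/home/user/prompts/code_review.txt",
    "models_prefixed_by_provider": ["openai:gpt-4o"],
}
```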
Implementation Reference
- Core handler function that reads the content from the specified absolute file path, validates the file, and delegates to the prompt function with multiple models (a direct-usage sketch follows after this list).

  ```python
  def prompt_from_file(abs_file_path: str, models_prefixed_by_provider: List[str] = None) -> List[str]:
      """
      Read text from a file and send it as a prompt to multiple models.

      Args:
          abs_file_path: Absolute path to the text file (must be an absolute path, not relative)
          models_prefixed_by_provider: List of model strings in format "provider:model"
                                       If None, uses the DEFAULT_MODELS environment variable

      Returns:
          List of responses from the models
      """
      file_path = Path(abs_file_path)

      # Validate file
      if not file_path.exists():
          raise FileNotFoundError(f"File not found: {abs_file_path}")

      if not file_path.is_file():
          raise ValueError(f"Not a file: {abs_file_path}")

      # Read file content
      try:
          with open(file_path, 'r', encoding='utf-8') as f:
              text = f.read()
      except Exception as e:
          logger.error(f"Error reading file {abs_file_path}: {e}")
          raise ValueError(f"Error reading file: {str(e)}")

      # Send prompt with file content
      return prompt(text, models_prefixed_by_provider)
  ```
- src/just_prompt/server.py:52-57 (schema): Pydantic input schema for validating the tool arguments: absolute file path and optional list of models (a validation sketch follows after this list).

  ```python
  class PromptFromFileSchema(BaseModel):
      abs_file_path: str = Field(..., description="Absolute path to the file containing the prompt (must be an absolute path, not relative)")
      models_prefixed_by_provider: Optional[List[str]] = Field(
          None,
          description="List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models."
      )
  ```
- src/just_prompt/server.py:127-131 (registration): MCP tool registration in the list_tools() method, specifying name, description, and input schema.

  ```python
  Tool(
      name=JustPromptTools.PROMPT_FROM_FILE,
      description="Send a prompt from a file to multiple LLM models. IMPORTANT: You MUST provide an absolute file path (e.g., /path/to/file or C:\\path\\to\\file), not a relative path.",
      inputSchema=PromptFromFileSchema.schema(),
  ),
  ```
- src/just_prompt/server.py:173-184 (handler): Dispatch handler in the MCP call_tool method that invokes prompt_from_file and formats the responses as TextContent (an output-formatting sketch follows after this list).

  ```python
  elif name == JustPromptTools.PROMPT_FROM_FILE:
      models_to_use = arguments.get("models_prefixed_by_provider")
      responses = prompt_from_file(arguments["abs_file_path"], models_to_use)

      # Get the model names that were actually used
      models_used = models_to_use if models_to_use else [model.strip() for model in os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL).split(",")]

      return [TextContent(
          type="text",
          text="\n".join([f"Model: {models_used[i]}\nResponse: {resp}" for i, resp in enumerate(responses)])
      )]
  ```
- src/just_prompt/server.py:38-38 (registration): Tool name constant definition in the JustPromptTools enum-like class.

  ```python
  PROMPT_FROM_FILE = "prompt_from_file"
  ```
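
To make the flow through the core handler concrete, here is a minimal direct-usage sketch. The import path is an assumption about the package layout, and the prompt file and model name are placeholders.

```python
# Minimal usage sketch. The import path below is an assumption, not confirmed by this reference.
import tempfile
from pathlib import Path

from just_prompt.molecules.prompt_from_file import prompt_from_file  # assumed module path

# The tool contract requires an absolute path, so build one explicitly.
prompt_path = Path(tempfile.gettempdir()) / "example_prompt.txt"
prompt_path.write_text("Summarize the trade-offs of event sourcing.", encoding="utf-8")

responses = prompt_from_file(str(prompt_path), ["openai:gpt-4o"])
for response in responses:
    print(response)
```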
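
Similarly, a small sketch of how the Pydantic schema validates incoming arguments; the import path is again an assumption, and the payload values are placeholders.

```python
from pydantic import ValidationError

from just_prompt.server import PromptFromFileSchema  # assumed import path

# A valid payload parses cleanly; the optional model list defaults to None,
# which signals the server to use its default models.
valid = PromptFromFileSchema(abs_file_path="/home/user/prompts/code_review.txt")
print(valid.models_prefixed_by_provider)  # None

# Omitting the required abs_file_path field raises a ValidationError.
try:
    PromptFromFileSchema()
except ValidationError as exc:
    print(exc)
```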
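
Finally, the dispatch handler joins each model's answer into a single text block. The sketch below reproduces only that formatting step, using placeholder model names and responses.

```python
# Placeholder data standing in for real model output; only the string formatting
# mirrors the dispatch handler above.
models_used = ["openai:gpt-4o", "o:gpt-4o"]
responses = ["Paris is the capital of France.", "The capital of France is Paris."]

text = "\n".join(
    f"Model: {models_used[i]}\nResponse: {resp}" for i, resp in enumerate(responses)
)
print(text)
# Model: openai:gpt-4o
# Response: Paris is the capital of France.
# Model: o:gpt-4o
# Response: The capital of France is Paris.
```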