Glama

prompt

Send a single prompt to multiple language models across providers like OpenAI, Anthropic, and Google Gemini using a unified interface. Ideal for comparing outputs efficiently.

Instructions

Send a prompt to multiple LLM models

Input Schema

models_prefixed_by_provider (optional): List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models.
text (required): The prompt text
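
As an illustration, a call to this tool might pass arguments shaped like the following (the model names are examples, not a recommendation):

```python
# Illustrative arguments for the 'prompt' tool; model names are examples only.
arguments = {
    "text": "Summarize the plot of Hamlet in two sentences.",
    "models_prefixed_by_provider": ["openai:gpt-4o", "anthropic:claude-3-5-haiku"],
}

# 'text' is required; 'models_prefixed_by_provider' may be omitted, in which
# case the server falls back to its default models.
```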

Implementation Reference

  • Main handler function that executes the 'prompt' tool: sends the input text to multiple specified or default LLM models in parallel, with model name correction and validation.
    from typing import List, Optional
    import concurrent.futures
    import os

    def prompt(text: str, models_prefixed_by_provider: Optional[List[str]] = None) -> List[str]:
        """
        Send a prompt to multiple models using parallel processing.

        Args:
            text: The prompt text
            models_prefixed_by_provider: List of model strings in format "provider:model".
                                         If None, uses the DEFAULT_MODELS environment variable.

        Returns:
            List of responses from the models, in the same order as the input models
        """
        # Fall back to the DEFAULT_MODELS environment variable if no models were given
        if not models_prefixed_by_provider:
            default_models = os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL)
            models_prefixed_by_provider = [model.strip() for model in default_models.split(",")]

        # Validate model strings
        validate_models_prefixed_by_provider(models_prefixed_by_provider)

        # Prepare corrected model strings
        corrected_models = []
        for model_string in models_prefixed_by_provider:
            provider, model = split_provider_and_model(model_string)

            # Get the correction model from the environment
            correction_model = os.environ.get("CORRECTION_MODEL", DEFAULT_MODEL)

            # Rebuild the model string if correction changed the model name
            corrected_model = _correct_model_name(provider, model, correction_model)
            if corrected_model != model:
                model_string = f"{provider}:{corrected_model}"

            corrected_models.append(model_string)

        # Query each model in parallel. Collecting futures in a list (rather than
        # a dict keyed by model string) preserves input order and tolerates
        # duplicate model entries.
        with concurrent.futures.ThreadPoolExecutor() as executor:
            futures = [
                executor.submit(_process_model_prompt, model_string, text)
                for model_string in corrected_models
            ]
            responses = [future.result() for future in futures]

        return responses
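The parallel fan-out at the heart of the handler can be exercised in isolation. The sketch below substitutes a stub for `_process_model_prompt` (whose real implementation calls the provider APIs) to show that responses come back in the same order the models were listed:

```python
import concurrent.futures
from typing import List

def _process_model_prompt(model_string: str, text: str) -> str:
    # Stub standing in for the real provider call.
    return f"{model_string} answered: {text!r}"

def fan_out(models: List[str], text: str) -> List[str]:
    # Submit one task per model; collecting futures in a list preserves input order.
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = [executor.submit(_process_model_prompt, m, text) for m in models]
        return [f.result() for f in futures]

responses = fan_out(["openai:gpt-4o", "anthropic:claude-3-haiku"], "ping")
```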
  • Pydantic input schema for the 'prompt' tool, defining 'text' and optional 'models_prefixed_by_provider'.
    from typing import List, Optional

    from pydantic import BaseModel, Field

    class PromptSchema(BaseModel):
        text: str = Field(..., description="The prompt text")
        models_prefixed_by_provider: Optional[List[str]] = Field(
            None,
            description="List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models."
        )
  • Registration of the 'prompt' tool in the MCP server's list_tools() method, specifying name, description, and input schema.
    Tool(
        name=JustPromptTools.PROMPT,
        description="Send a prompt to multiple LLM models",
        inputSchema=PromptSchema.schema(),
    ),
  • Dispatch handler in MCP server's call_tool() method that invokes the 'prompt' function and formats the responses as TextContent.
    if name == JustPromptTools.PROMPT:
        models_to_use = arguments.get("models_prefixed_by_provider")
        responses = prompt(arguments["text"], models_to_use)
        
        # Get the model names that were actually used
        models_used = models_to_use if models_to_use else [
            model.strip()
            for model in os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL).split(",")
        ]
        
        return [TextContent(
            type="text",
            text="\n".join([f"Model: {models_used[i]}\nResponse: {resp}" 
                          for i, resp in enumerate(responses)])
        )]
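The response formatting performed by the dispatch handler can be previewed with plain strings (stub responses shown; the real `TextContent` wrapper comes from the MCP SDK):

```python
models_used = ["openai:gpt-4o", "anthropic:claude-3-haiku"]
responses = ["Answer A", "Answer B"]

# Mirrors the join performed before the string is wrapped in TextContent.
text = "\n".join(
    f"Model: {models_used[i]}\nResponse: {resp}" for i, resp in enumerate(responses)
)
```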
  • Enum-like class defining the tool name constant 'PROMPT = "prompt"' used in registrations.
    class JustPromptTools:
        PROMPT = "prompt"
        PROMPT_FROM_FILE = "prompt_from_file"
        PROMPT_FROM_FILE_TO_FILE = "prompt_from_file_to_file"
        CEO_AND_BOARD = "ceo_and_board"
        LIST_PROVIDERS = "list_providers"
        LIST_MODELS = "list_models"
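Because the class only defines string constants, dispatch in call_tool() reduces to string comparison. A minimal sketch (the `handlers` table and its lambdas are hypothetical, not part of the server):

```python
class JustPromptTools:
    PROMPT = "prompt"
    LIST_MODELS = "list_models"

# Hypothetical dispatch table mapping tool-name constants to handlers.
handlers = {
    JustPromptTools.PROMPT: lambda args: f"prompted with {args['text']}",
    JustPromptTools.LIST_MODELS: lambda args: "model list",
}

result = handlers["prompt"]({"text": "hi"})
```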
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('send a prompt') but lacks details on what happens: e.g., how models are selected, whether responses are returned or stored, any rate limits, authentication needs, or error handling. This is a significant gap for a tool interacting with external LLMs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded and directly states the tool's function without unnecessary elaboration, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of interacting with multiple LLM models, no annotations, and no output schema, the description is incomplete. It doesn't cover behavioral aspects like response format, error handling, or model selection logic, leaving gaps that could hinder effective tool use by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description adds no additional meaning beyond what's in the schema (e.g., it doesn't explain the 'models_prefixed_by_provider' format further or provide examples beyond the schema's description). Baseline 3 is appropriate as the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Send a prompt to multiple LLM models' clearly states the action (send) and resource (prompt to LLM models), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'prompt_from_file' or 'prompt_from_file_to_file', which also involve sending prompts but with different input methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose this over 'prompt_from_file' (for file-based prompts) or 'prompt_from_file_to_file' (for file-to-file processing), nor does it specify any prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
