prompt

Send a single prompt to multiple language models across providers like OpenAI, Anthropic, and Google Gemini using a unified interface. Ideal for comparing outputs efficiently.

Instructions

Send a prompt to multiple LLM models

Input Schema

  • models_prefixed_by_provider (optional): List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models.
  • text (required): The prompt text
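
For illustration, a call to this tool that fans one prompt out to two explicitly listed models could pass arguments like the sketch below; the prompt text is made up, and the model IDs are examples only (the second uses the short 'o:' provider prefix shown above).

    # Hypothetical arguments for the 'prompt' tool (model IDs are illustrative).
    arguments = {
        "text": "Summarize the main differences between TCP and UDP.",
        "models_prefixed_by_provider": ["openai:gpt-4o", "o:gpt-4o-mini"],
    }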

Implementation Reference

  • Main handler function that executes the 'prompt' tool: sends the input text to multiple specified or default LLM models in parallel, with model name correction and validation (a usage sketch follows this reference list).
    def prompt(text: str, models_prefixed_by_provider: List[str] = None) -> List[str]:
        """
        Send a prompt to multiple models using parallel processing.

        Args:
            text: The prompt text
            models_prefixed_by_provider: List of model strings in format "provider:model"
                If None, uses the DEFAULT_MODELS environment variable

        Returns:
            List of responses from the models
        """
        # Use default models if no models provided
        if not models_prefixed_by_provider:
            default_models = os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL)
            models_prefixed_by_provider = [model.strip() for model in default_models.split(",")]

        # Validate model strings
        validate_models_prefixed_by_provider(models_prefixed_by_provider)

        # Prepare corrected model strings
        corrected_models = []
        for model_string in models_prefixed_by_provider:
            provider, model = split_provider_and_model(model_string)

            # Get correction model from environment
            correction_model = os.environ.get("CORRECTION_MODEL", DEFAULT_MODEL)

            # Check if model needs correction
            corrected_model = _correct_model_name(provider, model, correction_model)

            # Use corrected model
            if corrected_model != model:
                model_string = f"{provider}:{corrected_model}"

            corrected_models.append(model_string)

        # Process each model in parallel using ThreadPoolExecutor
        responses = []
        with concurrent.futures.ThreadPoolExecutor() as executor:
            # Submit all tasks
            future_to_model = {
                executor.submit(_process_model_prompt, model_string, text): model_string
                for model_string in corrected_models
            }

            # Collect results in order
            for model_string in corrected_models:
                for future, future_model in future_to_model.items():
                    if future_model == model_string:
                        responses.append(future.result())
                        break

        return responses
  • Pydantic input schema for the 'prompt' tool, defining 'text' and optional 'models_prefixed_by_provider'.
    class PromptSchema(BaseModel):
        text: str = Field(..., description="The prompt text")
        models_prefixed_by_provider: Optional[List[str]] = Field(
            None,
            description="List of models with provider prefixes (e.g., 'openai:gpt-4o' or 'o:gpt-4o'). If not provided, uses default models."
        )
  • Registration of the 'prompt' tool in the MCP server's list_tools() method, specifying name, description, and input schema.
    Tool(
        name=JustPromptTools.PROMPT,
        description="Send a prompt to multiple LLM models",
        inputSchema=PromptSchema.schema(),
    ),
  • Dispatch handler in MCP server's call_tool() method that invokes the 'prompt' function and formats the responses as TextContent.
    if name == JustPromptTools.PROMPT:
        models_to_use = arguments.get("models_prefixed_by_provider")
        responses = prompt(arguments["text"], models_to_use)

        # Get the model names that were actually used
        models_used = models_to_use if models_to_use else [
            model.strip()
            for model in os.environ.get("DEFAULT_MODELS", DEFAULT_MODEL).split(",")
        ]

        return [TextContent(
            type="text",
            text="\n".join([f"Model: {models_used[i]}\nResponse: {resp}" for i, resp in enumerate(responses)])
        )]
  • Enum-like class defining the tool name constant 'PROMPT = "prompt"' used in registrations.
    class JustPromptTools:
        PROMPT = "prompt"
        PROMPT_FROM_FILE = "prompt_from_file"
        PROMPT_FROM_FILE_TO_FILE = "prompt_from_file_to_file"
        CEO_AND_BOARD = "ceo_and_board"
        LIST_PROVIDERS = "list_providers"
        LIST_MODELS = "list_models"
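
As a rough usage sketch (not part of the server code), the pieces above can be exercised directly, assuming prompt() is importable from the server module and the relevant provider API keys are configured; the prompt text and model IDs below are illustrative only.

    # Minimal sketch: call the handler directly and print results in the same
    # "Model: ... / Response: ..." shape the MCP dispatch handler produces.
    import os

    os.environ.setdefault("DEFAULT_MODELS", "openai:gpt-4o")  # fallback when no models are passed

    models = ["openai:gpt-4o", "o:gpt-4o-mini"]  # 'o:' is the short provider prefix shown in the schema
    responses = prompt("Explain the CAP theorem in two sentences.", models)

    for model_string, response in zip(models, responses):
        print(f"Model: {model_string}\nResponse: {response}\n")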

MCP directory API

We provide all the information about MCP servers via the MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/disler/just-prompt'
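
The same endpoint can also be queried from Python; a minimal sketch using only the standard library, printing the raw JSON body returned by the API:

    # Fetch the just-prompt entry from the Glama MCP directory API.
    import urllib.request

    url = "https://glama.ai/api/mcp/v1/servers/disler/just-prompt"
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8"))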

If you have feedback or need assistance with the MCP directory API, please join our Discord server.