
outsource_text

Delegate text generation to external AI models for different capabilities or perspectives. Access multiple providers through a unified interface.

Instructions

Delegate text generation to another AI model. Use this when you need capabilities
or perspectives from a different model than yourself.

Args:
    provider: The AI provider to use (e.g., "openai", "anthropic", "google", "groq")
    model: The specific model identifier (e.g., "gpt-4o", "claude-3-5-sonnet-20241022", "gemini-2.0-flash-exp")
    prompt: The instruction or query to send to the external model

Returns:
    The text response from the external model, or an error message if the request fails

Example usage:
    To get a different perspective: provider="anthropic", model="claude-3-5-sonnet-20241022", prompt="Analyze this problem from a different angle..."
    To leverage specialized models: provider="deepseek", model="deepseek-coder", prompt="Write optimized Python code for..."
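An MCP client invokes this tool by passing the three fields above as tool arguments. The sketch below builds and checks such an argument payload in Python; the `validate_args` helper is hypothetical and only illustrates the required-field contract, it is not part of the server.

```python
# Hypothetical sketch of the arguments an MCP client would send to
# outsource_text. validate_args is illustrative, not server code.

REQUIRED_FIELDS = ("provider", "model", "prompt")

def validate_args(arguments: dict) -> dict:
    """Check that every required tool argument is present."""
    missing = [f for f in REQUIRED_FIELDS if f not in arguments]
    if missing:
        raise ValueError(f"Missing required arguments: {missing}")
    return arguments

args = validate_args({
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20241022",
    "prompt": "Analyze this problem from a different angle...",
})
print(args["provider"])  # -> anthropic
```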

Input Schema

Name      Required  Description  Default
provider  Yes
model     Yes
prompt    Yes

Implementation Reference

  • The core handler function for the 'outsource_text' tool. Decorated with @mcp.tool() for automatic registration and schema inference from type hints and docstring. It maps the provider to a model class, creates an Agent, and executes the prompt to return the generated text.
    @mcp.tool()
    async def outsource_text(provider: str, model: str, prompt: str) -> str:
        """
        Delegate text generation to another AI model. Use this when you need capabilities
        or perspectives from a different model than yourself.
    
        Args:
            provider: The AI provider to use (e.g., "openai", "anthropic", "google", "groq")
            model: The specific model identifier (e.g., "gpt-4o", "claude-3-5-sonnet-20241022", "gemini-2.0-flash-exp")
            prompt: The instruction or query to send to the external model
    
        Returns:
            The text response from the external model, or an error message if the request fails
    
        Example usage:
            To get a different perspective: provider="anthropic", model="claude-3-5-sonnet-20241022", prompt="Analyze this problem from a different angle..."
            To leverage specialized models: provider="deepseek", model="deepseek-coder", prompt="Write optimized Python code for..."
        """
        try:
            # Get the appropriate model class based on provider
            provider_lower = provider.lower()
    
            if provider_lower not in PROVIDER_MODEL_MAP:
                raise ValueError(f"Unknown provider: {provider}")
    
            model_class = PROVIDER_MODEL_MAP[provider_lower]
    
            # Create the agent
            agent = Agent(
                model=model_class(id=model),
                name="Text Generation Agent",
                instructions="You are a helpful AI assistant. Respond to the user's prompt directly and concisely.",
            )
    
            # Run the agent and get response
            response = await agent.arun(prompt)
    
            # Extract the text content from the response
            if hasattr(response, "content"):
                return response.content
            else:
                return str(response)
    
        except Exception as e:
            return f"Error generating text: {str(e)}"
  • Global dictionary mapping lowercase provider names to their corresponding model classes from the 'agno' library, used within the outsource_text handler to dynamically instantiate the correct model based on user input.
    # Provider to model class mapping
    PROVIDER_MODEL_MAP = {
        "openai": OpenAIChat,
        "anthropic": Claude,
        "google": Gemini,
        "groq": Groq,
        "deepseek": DeepSeek,
        "xai": xAI,
        "perplexity": Perplexity,
        "cohere": Cohere,
        "fireworks": Fireworks,
        "huggingface": HuggingFace,
        "mistral": MistralChat,
        "nvidia": Nvidia,
        "ollama": Ollama,
        "openrouter": OpenRouter,
        "sambanova": Sambanova,
        "together": Together,
        "litellm": LiteLLM,
        "vercel": v0,
        "v0": v0,
        "aws": AwsBedrock,
        "bedrock": AwsBedrock,
        "azure": AzureAIFoundry,
        "cerebras": Cerebras,
        "meta": Llama,
        "deepinfra": DeepInfra,
        "ibm": WatsonX,
        "watsonx": WatsonX,
    }
  • server.py:64-64 (registration)
    The @mcp.tool() decorator registers the outsource_text function as an MCP tool, automatically generating schema from signature and docstring.
    @mcp.tool()
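The provider-dispatch pattern used by the handler can be sketched in isolation. In the runnable sketch below, FakeOpenAI and FakeClaude are hypothetical stand-ins for the agno model classes; only the case-insensitive lookup and error path mirror the real handler.

```python
# Minimal sketch of the handler's provider-dispatch pattern.
# FakeOpenAI / FakeClaude stand in for the real agno model classes.

class FakeOpenAI:
    def __init__(self, id: str):
        self.id = id

class FakeClaude:
    def __init__(self, id: str):
        self.id = id

PROVIDER_MODEL_MAP = {
    "openai": FakeOpenAI,
    "anthropic": FakeClaude,
}

def resolve_model(provider: str, model: str):
    """Normalize the provider name and instantiate the matching model class."""
    provider_lower = provider.lower()  # lookup is case-insensitive
    if provider_lower not in PROVIDER_MODEL_MAP:
        raise ValueError(f"Unknown provider: {provider}")
    return PROVIDER_MODEL_MAP[provider_lower](id=model)

m = resolve_model("Anthropic", "claude-3-5-sonnet-20241022")
print(type(m).__name__, m.id)
```

Keeping the map at module level, as the server does, means adding a provider is a one-line change with no edits to the handler body.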
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it delegates to external models, returns text responses or error messages, and implies it's a read-only operation (no destructive effects mentioned). However, it doesn't cover rate limits, authentication needs, or detailed error handling beyond 'if the request fails,' leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and usage guidelines. Each sentence adds value, such as parameter explanations and examples, with no wasted words. The structure is logical and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (delegation to external AI models) and lack of annotations and output schema, the description is mostly complete. It covers purpose, usage, parameters, and returns, but could benefit from more details on error types or operational constraints. However, it's sufficient for basic understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds significant meaning beyond the input schema by explaining each parameter's purpose with examples: 'provider' specifies AI providers like 'openai', 'model' is the specific identifier, and 'prompt' is the instruction to send. This fully compensates for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs and resources: 'Delegate text generation to another AI model.' It distinguishes from the sibling tool 'outsource_image' by specifying 'text generation' versus image-related tasks. The purpose is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this when you need capabilities or perspectives from a different model than yourself.' It provides clear context for usage, including example scenarios like getting different perspectives or leveraging specialized models, without misleading guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gwbischof/outsource-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.