Outsource MCP

Server Configuration

Describes the environment variables used to configure the server. All are optional; set the credential for each provider you want to use.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `OLLAMA_HOST` | No | Host URL for Ollama services (local models) | `http://localhost:11434` |
| `XAI_API_KEY` | No | API key for xAI services (Grok models) | |
| `GROQ_API_KEY` | No | API key for Groq services (Llama 3, Mixtral, etc.) | |
| `LLAMA_API_KEY` | No | API key for Meta Llama services (direct Meta access) | |
| `COHERE_API_KEY` | No | API key for Cohere services (Command models) | |
| `GOOGLE_API_KEY` | No | API key for Google services (Gemini Pro, Gemini Flash, etc.) | |
| `NVIDIA_API_KEY` | No | API key for NVIDIA services (various models) | |
| `OPENAI_API_KEY` | No | API key for OpenAI services (GPT-4, GPT-3.5, DALL-E, etc.) | |
| `MISTRAL_API_KEY` | No | API key for Mistral AI services (Mistral Large, Medium, Small) | |
| `CEREBRAS_API_KEY` | No | API key for Cerebras services (fast inference) | |
| `DEEPSEEK_API_KEY` | No | API key for DeepSeek services (DeepSeek Chat & Coder) | |
| `TOGETHER_API_KEY` | No | API key for Together AI services (open-source models) | |
| `ANTHROPIC_API_KEY` | No | API key for Anthropic services (Claude 3.5, Claude 3, etc.) | |
| `DEEPINFRA_API_KEY` | No | API key for DeepInfra services (optimized models) | |
| `FIREWORKS_API_KEY` | No | API key for Fireworks AI services (fast inference) | |
| `SAMBANOVA_API_KEY` | No | API key for SambaNova services (enterprise models) | |
| `OPENROUTER_API_KEY` | No | API key for OpenRouter services (multi-provider access) | |
| `PERPLEXITY_API_KEY` | No | API key for Perplexity services (Sonar models) | |
| `HUGGINGFACE_API_KEY` | No | API key for HuggingFace services (open-source models) | |
| `IBM_WATSONX_API_KEY` | No | API key for IBM WatsonX services (IBM models) | |
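As a sketch of how these variables are consumed, the hypothetical helper below (not part of the server) checks which providers have credentials set in the environment. The provider-to-variable mapping covers only a few of the rows above:

```python
import os

# Hypothetical mapping of provider names to the variables listed above.
# Note that OLLAMA_HOST is a host URL rather than an API key.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "groq": "GROQ_API_KEY",
    "ollama": "OLLAMA_HOST",
}

def configured_providers(env=None):
    """Return providers whose credential variable is set and non-empty."""
    env = os.environ if env is None else env
    return sorted(name for name, var in PROVIDER_ENV_VARS.items() if env.get(var))
```

An empty string counts as unset, matching the usual convention that a blank API key is unusable.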

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

outsource_text

Delegate text generation to another AI model. Use this when you need capabilities or perspectives from a different model than your own.

Args:
- provider: The AI provider to use (e.g., "openai", "anthropic", "google", "groq")
- model: The specific model identifier (e.g., "gpt-4o", "claude-3-5-sonnet-20241022", "gemini-2.0-flash-exp")
- prompt: The instruction or query to send to the external model

Returns: The text response from the external model, or an error message if the request fails.

Example usage:
- To get a different perspective: provider="anthropic", model="claude-3-5-sonnet-20241022", prompt="Analyze this problem from a different angle..."
- To leverage specialized models: provider="deepseek", model="deepseek-coder", prompt="Write optimized Python code for..."
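The exact call mechanism depends on your MCP client, but the argument payload simply mirrors the three parameters described above; a minimal sketch:

```python
# Arguments for an outsource_text call, mirroring the tool description.
# The prompt here is an illustrative placeholder.
text_args = {
    "provider": "deepseek",
    "model": "deepseek-coder",
    "prompt": "Write optimized Python code for...",
}
```

All three fields are required by the tool's documented signature.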
outsource_image

Delegate image generation to an external AI model. Use this when you need to create visual content.

Args:
- provider: The AI provider to use (currently only "openai" is supported)
- model: The image model to use ("dall-e-3" for high quality, "dall-e-2" for faster/cheaper results)
- prompt: A detailed description of the image you want to generate

Returns: The URL of the generated image, which can be shared with users or used in responses.

Example usage:
- For high-quality images: provider="openai", model="dall-e-3", prompt="A photorealistic rendering of..."
- For quick concepts: provider="openai", model="dall-e-2", prompt="A simple sketch showing..."

Note: Only OpenAI currently supports image generation; other providers will return an error.
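Because only OpenAI supports image generation at the moment, a client may want to fail fast before issuing the call. The pre-flight check below is a hypothetical sketch that just encodes the documented constraint:

```python
# Per the note above, image generation is limited to a single provider.
SUPPORTED_IMAGE_PROVIDERS = {"openai"}

def check_image_request(provider, model, prompt):
    """Hypothetical pre-flight check mirroring the documented constraint."""
    if provider not in SUPPORTED_IMAGE_PROVIDERS:
        raise ValueError(f"provider {provider!r} does not support image generation")
    return {"provider": provider, "model": model, "prompt": prompt}
```

Catching the unsupported case locally avoids a round trip that the server would reject anyway.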

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gwbischof/outsource-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.