
Grok MCP

Server Configuration

Describes the environment variables required to run the server.

Name          Required   Description                        Default
XAI_API_KEY   Yes        X.AI API key from console.x.ai     (none)
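
In practice the key is passed to the server process by the MCP client that launches it. The sketch below shows one way to do that with the official MCP Python SDK; the launch command and script name are placeholders for however Grok MCP is installed locally.

import os

from mcp import StdioServerParameters

# Hypothetical launch command; adjust to match your local install of Grok MCP.
server_params = StdioServerParameters(
    command="python",
    args=["grok_mcp_server.py"],
    env={"XAI_API_KEY": os.environ["XAI_API_KEY"]},  # key obtained from console.x.ai
)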


Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

list_models
Gets a list of all available Grok models from xAI. This is super handy when you need to see what models are available to use. You'll get the model ID, who owns it, and when it was created. Perfect for checking if a new model is available or verifying what models you can access. Returns a formatted string with all the model details.
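
As a rough illustration of how a client calls these tools, here is a minimal sketch using the official MCP Python SDK. It reuses the server_params object from the configuration sketch above; everything else follows the tool names documented in this listing.

import asyncio

from mcp import ClientSession
from mcp.client.stdio import stdio_client

async def main() -> None:
    # server_params comes from the configuration sketch above.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("list_models", arguments={})
            print(result.content)  # formatted string listing model IDs, owners, creation dates

asyncio.run(main())

The later sketches assume an initialized ClientSession like the one above and show only the call_tool invocation for each tool.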
generate_image
Creates AI-generated images using Grok's image generation models. Just describe what you want to see, and this tool will generate it. You can create multiple variations at once by adjusting the 'n' parameter. The model might revise your prompt to get better results; you'll see that in the response.
Args:
  prompt: What you want the image to show (be descriptive!)
  n: How many images to generate (default is 1)
  response_format: Either "url" for image links or "b64_json" for base64 data
  model: Which image model to use (default is grok-2-image-1212)
Returns a dict with the generated images and any revised prompt used.
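
A hedged sketch of a generate_image call, assuming an initialized ClientSession as above; the prompt text is illustrative, while the argument names follow the listing.

from mcp import ClientSession

async def make_images(session: ClientSession):
    return await session.call_tool(
        "generate_image",
        arguments={
            "prompt": "a watercolor painting of a lighthouse at dusk",
            "n": 2,                    # two variations
            "response_format": "url",  # or "b64_json" for inline base64 data
        },
    )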
chat_with_vision
Analyzes images and answers questions about them using Grok's vision models. This is your go-to tool when you need to understand what's in an image. You can provide local image files, URLs, or both. Ask questions like "What's in this image?" or "Read the text from this screenshot." Supports JPG, JPEG, and PNG formats.
Args:
  prompt: Your question or instruction about the image(s)
  image_paths: List of local file paths to images (optional)
  image_urls: List of image URLs from the web (optional)
  detail: How closely to analyze ("auto", "low", or "high")
  model: Which vision-capable model to use (default is grok-4-0709)
Returns the AI's response as a string describing or analyzing the images.
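
A sketch of a chat_with_vision call under the same assumptions; the file path and question are made up for illustration.

from mcp import ClientSession

async def read_screenshot(session: ClientSession):
    return await session.call_tool(
        "chat_with_vision",
        arguments={
            "prompt": "Read the text from this screenshot.",
            "image_paths": ["./screenshot.png"],  # local JPG, JPEG, or PNG files
            "detail": "high",                     # closer analysis than "auto" or "low"
        },
    )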
chat
Basic chat completion with Grok models: your standard conversational AI tool. Use this for general questions, creative writing, coding help, or any text task. You can optionally keep conversation history to maintain context across multiple exchanges. For reasoning models, use the reasoning_effort parameter. For other models, you have more control with penalties and stop sequences.
Args:
  prompt: What you want to ask or have the AI do
  model: Which Grok model to use (default is grok-4-fast)
  system_prompt: Instructions for how the AI should behave (only used at the start)
  use_conversation_history: Keep context between messages (default False)
  temperature: Creativity level 0-2 (higher = more creative)
  max_tokens: Maximum length of response
  top_p: Alternative to temperature for controlling randomness
  presence_penalty: Penalize talking about the same topics (-2.0 to 2.0)
  frequency_penalty: Penalize repeating the same words (-2.0 to 2.0)
  stop: List of sequences where the AI should stop generating
  reasoning_effort: "low" or "high" for reasoning models only (grok-3-mini)
Returns the AI's response as a string.
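
A sketch of a plain single-turn chat call; parameter values other than the names documented above are illustrative.

from mcp import ClientSession

async def ask(session: ClientSession, question: str):
    return await session.call_tool(
        "chat",
        arguments={
            "prompt": question,
            "system_prompt": "Answer concisely.",
            "temperature": 0.7,   # moderate creativity
            "max_tokens": 512,    # cap the response length
        },
    )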
chat_with_reasoning
Uses Grok's reasoning models to think through complex problems step by step. This is perfect for math problems, logic puzzles, or anything requiring careful thinking. You'll get both the final answer AND the reasoning process that led to it. Think of it like showing your work in math class. Only works with reasoning models: grok-4, grok-3-mini, or grok-3-mini-fast.
Args:
  prompt: The problem or question you need solved
  model: Which reasoning model to use (default is grok-3-mini)
  system_prompt: Instructions for how the AI should approach the problem
  reasoning_effort: "low" or "high" (only for grok-3-mini models, not grok-4)
  temperature: Controls randomness 0-2 (lower = more focused)
  max_tokens: Maximum length of response
  top_p: Alternative way to control randomness
Returns a dict with 'content' (the answer), 'reasoning_content' (the thinking process), and 'usage' (token counts).
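
A sketch of chat_with_reasoning with the reasoning effort turned up, again assuming an initialized session; the problem statement is illustrative.

from mcp import ClientSession

async def solve(session: ClientSession):
    # Per the listing, the result carries both the answer and the reasoning trace.
    return await session.call_tool(
        "chat_with_reasoning",
        arguments={
            "prompt": "A train leaves at 09:40 and arrives at 11:05. How long is the trip?",
            "model": "grok-3-mini",
            "reasoning_effort": "high",  # only honored by the grok-3-mini models
        },
    )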
live_search
Searches the web in real time and provides answers with sources. This is like having Grok browse the internet for you. It searches the web, news, X (Twitter), and even RSS feeds, then synthesizes everything into a comprehensive answer. You'll get citations so you can verify the information. Great for current events, fact-checking, or anything requiring up-to-date information.
Args:
  prompt: Your question or search query
  model: Which Grok model to use (default is grok-4)
  mode: "on" to enable search (default), "off" to disable
  return_citations: Whether to include source links (default True)
  from_date: Start date for search results (YYYY-MM-DD format)
  to_date: End date for search results (YYYY-MM-DD format)
  max_search_results: How many sources to check (default 20)
  country: Filter results by country code (e.g., "us", "uk")
  rss_links: List of RSS feed URLs to include
  sources: Custom source configuration (overrides country/rss_links if provided)
  system_prompt: Instructions for how to handle the search results
Returns a dict with 'content' (the answer), 'citations' (sources used), 'usage' (tokens), and 'num_sources_used'.
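
A sketch of a live_search call scoped to recent US sources; the query and date are placeholders.

from mcp import ClientSession

async def search_recent_news(session: ClientSession):
    return await session.call_tool(
        "live_search",
        arguments={
            "prompt": "What changed in the most recent xAI model release?",
            "from_date": "2025-01-01",   # placeholder YYYY-MM-DD start date
            "max_search_results": 10,
            "country": "us",
            "return_citations": True,
        },
    )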
stateful_chat
Have ongoing conversations that are saved on xAI's servers for up to 30 days. Unlike regular chat, this maintains conversation history server-side, so you can continue conversations across sessions without managing history yourself. Start a new conversation without a response_id, then use the returned ID to continue it later. Super useful for long-running projects or when you want to pick up where you left off days later.
Args:
  prompt: What you want to say in this turn of the conversation
  response_id: ID from a previous response to continue that conversation (optional)
  model: Which Grok model to use (default is grok-4)
  system_prompt: Instructions for the AI (only used when starting a new conversation)
  include_reasoning: Get a summary of the model's thinking process (default False)
  temperature: Controls creativity 0-2
  max_tokens: Maximum length of response
Returns a dict with 'content' (the response), 'response_id' (save this to continue later!), 'status', 'model', 'usage', 'stored_until' (expiration date), and optionally 'reasoning' if you requested it.
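
A two-turn stateful_chat sketch: start without a response_id, then pass the returned ID to continue. How the response_id is unpacked from the tool result below is an assumption about the payload shape, so adjust it to what your client actually receives.

import json

from mcp import ClientSession

async def two_turn_conversation(session: ClientSession):
    # First turn: no response_id, so a new server-side conversation is created.
    first = await session.call_tool(
        "stateful_chat",
        arguments={"prompt": "Let's plan a small command-line tool together."},
    )
    # Assumption: the returned dict arrives as JSON text in the first content block.
    response_id = json.loads(first.content[0].text)["response_id"]

    # Second turn: continue the same conversation by passing the saved ID.
    return await session.call_tool(
        "stateful_chat",
        arguments={
            "prompt": "Now suggest a name for it.",
            "response_id": response_id,
        },
    )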
retrieve_stateful_response
Fetches a previously saved stateful conversation response from xAI's servers. Use this to look up old conversations or check what was said in a previous exchange. Helpful if you lost track of a conversation or want to review past interactions. Works with any response_id from the last 30 days.
Args:
  response_id: The ID of the response you want to retrieve
Returns a dict with all the details: 'content', 'response_id', 'model', 'created_at', 'status', 'reasoning' (if available), 'usage', and 'previous_response_id' (if it was part of a chain).
delete_stateful_response
Permanently deletes a stateful conversation response from xAI's servers. Use this when you want to remove a conversation for privacy or cleanup purposes. Once deleted, you won't be able to retrieve it or continue from it. This is permanent, so make sure you really want to delete it!
Args:
  response_id: The ID of the response you want to delete
Returns a dict confirming the deletion with 'response_id', 'deleted' (True/False), and a confirmation message.
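
A combined sketch for the two stateful-response housekeeping tools: retrieve a saved response for review, then delete it permanently. The response_id is whatever a previous stateful_chat call returned.

from mcp import ClientSession

async def review_and_clean_up(session: ClientSession, response_id: str):
    # Look up a saved response from the last 30 days.
    saved = await session.call_tool(
        "retrieve_stateful_response",
        arguments={"response_id": response_id},
    )
    print(saved.content)

    # Permanently remove it; there is no undo.
    return await session.call_tool(
        "delete_stateful_response",
        arguments={"response_id": response_id},
    )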

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/merterbak/Grok-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.