Grok-MCP

MCP server for xAI's Grok API, providing access to capabilities including image understanding, image generation, live web search, and reasoning models.

Features

  • Multiple Grok Models: Access to Grok-4, Grok-4-Fast, Grok-3-Mini, and more

  • Image Generation: Create images using Grok's image generation model

  • Vision Capabilities: Analyze images with Grok's vision models

  • Live Web Search: Real time web search with citations from news, web, X, and RSS feeds

  • Reasoning Models: Advanced reasoning with extended thinking models (Grok-3-Mini, Grok-4)

  • Stateful Conversations: Use xAI's newly released stateful responses feature to maintain conversation context across multiple requests via a response ID

  • Conversation History: Toggle the use of prior context on or off per request

Prerequisites

  • Python 3.11 or higher

  • xAI API key (Get one here)

  • uv package manager

Installation

  1. Clone the repository:

git clone https://github.com/merterbak/Grok-MCP.git
cd Grok-MCP

  2. Create a virtual environment:

uv venv
source .venv/bin/activate

  3. Install the package:

uv pip install -e .

(Optional) Alternatively, sync all dependencies:

uv sync --all-extras

Configuration

Claude Desktop Integration

Add this to your Claude Desktop configuration file:

{
  "mcpServers": {
    "grok": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/Grok-MCP",
        "run",
        "python",
        "main.py"
      ],
      "env": {
        "XAI_API_KEY": "your_api_key_here"
      }
    }
  }
}

Filesystem MCP (Optional)

Claude Desktop can't pass images uploaded in the chat to an MCP tool. The easiest way to give the server direct access to files on your computer is the official Filesystem MCP server. After setting it up, you can simply write an image's file path (such as /Users/mert/Desktop/image.png) in the chat, and Claude can use it with any vision tool.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/<your-username>/Desktop",
        "/Users/<your-username>/Downloads"
      ]
    }
  }
}

To run the server directly over stdio:

uv run python main.py

Docker:

docker compose up --build

MCP Inspector:

mcp dev main.py

Available Tools

1. list_models

List all available Grok models with creation dates and ownership information.

2. chat

Standard chat completion with extensive customization options.

Parameters:

  • prompt (required): Your message

  • model: Model to use (default: "grok-4-fast")

  • system_prompt: Optional system instruction

  • use_conversation_history: Enable multi-turn conversations

  • temperature, max_tokens, top_p: Generation parameters

  • presence_penalty, frequency_penalty, stop: Advanced control

  • reasoning_effort: For reasoning models ("low" or "high")
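As a sketch, a client invoking the chat tool might assemble arguments like the following. The parameter names come from the list above; the prompt text and values are purely illustrative:

```python
# Illustrative arguments for the `chat` tool; parameter names follow the README.
chat_args = {
    "prompt": "Explain the Model Context Protocol in two sentences.",
    "model": "grok-4-fast",                      # the documented default
    "system_prompt": "You are a concise assistant.",
    "use_conversation_history": True,            # enable multi-turn context
    "temperature": 0.7,
    "max_tokens": 512,
}
```

Any parameter you omit falls back to the server's default, so a minimal call needs only `prompt`.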

3. chat_with_reasoning

Get detailed reasoning along with the response.

Parameters:

  • prompt (required): Your question or task

  • model: "grok-4", "grok-3-mini", or "grok-3-mini-fast"

  • reasoning_effort: "low" or "high" (not for grok-4)

  • system_prompt, temperature, max_tokens, top_p

Returns: Content, reasoning content, and usage statistics
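Note the grok-4 restriction on reasoning_effort. A minimal sketch of building arguments that only sets that parameter for models which accept it (the helper name is hypothetical, not part of this server):

```python
def build_reasoning_args(prompt, model="grok-3-mini", reasoning_effort="high"):
    """Build chat_with_reasoning arguments. Per the README, grok-4 does not
    take reasoning_effort, so we omit it for that model."""
    args = {"prompt": prompt, "model": model}
    if model != "grok-4":
        args["reasoning_effort"] = reasoning_effort
    return args

# grok-3-mini accepts an effort level; grok-4 does not.
mini_args = build_reasoning_args("Prove sqrt(2) is irrational.")
g4_args = build_reasoning_args("Prove sqrt(2) is irrational.", model="grok-4")
```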

4. chat_with_vision

Analyze images with natural language queries.

Parameters:

  • prompt (required): Your question about the image(s)

  • image_paths: List of local image file paths

  • image_urls: List of image URLs

  • detail: "auto", "low", or "high"

  • model: Vision-capable model (default: "grok-4-0709")

Supported formats: JPG, JPEG, PNG
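Local files passed via image_paths are typically read and base64-encoded into data URLs before being sent to a vision API. A sketch of that conversion, restricted to the supported formats (how this server does it internally is an assumption):

```python
import base64
import mimetypes

def to_data_url(path: str) -> str:
    """Read a local image and encode it as a data URL (JPG/JPEG/PNG only)."""
    mime, _ = mimetypes.guess_type(path)
    if mime not in ("image/jpeg", "image/png"):
        raise ValueError(f"Unsupported image format: {path}")
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

URLs passed via image_urls need no such conversion; they are forwarded as-is.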

5. generate_image

Create images from text descriptions.

Parameters:

  • prompt (required): Image description

  • n: Number of images to generate (default: 1)

  • response_format: "url" or "b64_json"

  • model: Image generation model (default: "grok-2-image-1212")

Returns: Generated images and revised prompt
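When response_format is "b64_json", the returned image payload must be decoded before saving. A hedged sketch of that step (the base64-string response shape follows the common OpenAI-style convention and is assumed here):

```python
import base64

def save_b64_image(b64_data: str, out_path: str) -> int:
    """Decode a base64 image payload, write it to disk, and return the byte count."""
    raw = base64.b64decode(b64_data)
    with open(out_path, "wb") as f:
        f.write(raw)
    return len(raw)
```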

6. live_search

Search the web in real-time with source citations.

Parameters:

  • prompt (required): Your search query

  • model: Model to use (default: "grok-4")

  • mode: "on" or "off"

  • return_citations: Include source citations (default: true)

  • from_date, to_date: Date range (YYYY-MM-DD)

  • max_search_results: Max results to fetch (default: 20)

  • country: Country code for localized search

  • rss_links: List of RSS feed URLs to search

  • sources: Custom source configuration

Returns: Content, citations, usage stats, and number of sources used
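Since from_date and to_date must be in YYYY-MM-DD form, it can help to validate the range before invoking the tool. A sketch with an illustrative helper (not part of this server):

```python
from datetime import date

def build_search_args(prompt, from_date=None, to_date=None, max_search_results=20):
    """Assemble live_search arguments, validating the YYYY-MM-DD date range."""
    args = {"prompt": prompt, "max_search_results": max_search_results}
    for key, value in (("from_date", from_date), ("to_date", to_date)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError if not YYYY-MM-DD
            args[key] = value
    if from_date and to_date and from_date > to_date:
        raise ValueError("from_date must not be after to_date")
    return args
```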

7. stateful_chat

Maintain conversation state across multiple requests on xAI servers.

Parameters:

  • prompt (required): Your message

  • response_id: Previous response ID to continue conversation

  • model: Model to use (default: "grok-4")

  • system_prompt: System instruction (only for new conversations)

  • include_reasoning: Include reasoning summary

  • temperature, max_tokens

Returns: Response with ID for continuing the conversation (stored for 30 days)
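The returned response ID is what chains turns together: the first call starts a conversation (optionally with a system prompt), and each follow-up passes the previous ID. A sketch of that calling pattern (the helper and the "resp_123" ID are illustrative):

```python
def next_turn_args(prompt, previous_response_id=None, model="grok-4"):
    """Build stateful_chat arguments; response_id continues a stored conversation."""
    args = {"prompt": prompt, "model": model}
    if previous_response_id is not None:
        args["response_id"] = previous_response_id
    else:
        # Per the README, system_prompt only applies to new conversations.
        args["system_prompt"] = "You are a helpful assistant."
    return args

first = next_turn_args("What is MCP?")
# After the first call returns an ID, thread it into the follow-up:
follow_up = next_turn_args("Give an example.", previous_response_id="resp_123")
```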

8. retrieve_stateful_response

Retrieve a previously stored conversation response.

Parameters:

  • response_id (required): The response ID to retrieve

9. delete_stateful_response

Delete a stored conversation from xAI servers.

Parameters:

  • response_id (required): The response ID to delete

License

This project is open source and available under the MIT License.
