# Grok MCP Server
MCP server for the Grok API, enabling chat completions, text completions, embeddings, and model operations with Grok AI. It is implemented with FastMCP for quick setup and tool registration. By default, the server exposes an HTTP streaming endpoint on port 8080.
## Features

- **Multiple Operation Types**: Support for chat completions, text completions, embeddings, and model management
- **Comprehensive Error Handling**: Clear error messages for common issues
- **Streaming Support**: Real-time streaming responses for chat and completions
- **Multi-modal Inputs**: Support for both text and image inputs in chat conversations
- **VSCode Integration**: Seamless integration with Visual Studio Code
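As a sketch of what the VS Code integration can look like, the server can be registered as an HTTP MCP server in `.vscode/mcp.json`. The server name `grok` is arbitrary, and the exact config schema depends on your VS Code version:

```json
{
  "servers": {
    "grok": {
      "type": "http",
      "url": "http://localhost:8080/stream"
    }
  }
}
```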
## Tools

- **`list_models`**: List available models for the API
  - Returns: Array of available models with details
- **`get_model`**: Get information about a specific model
  - Inputs:
    - `model_id` (string): The ID of the model to retrieve
  - Returns: Model details
- **`create_chat_completion`**: Create a chat completion with Grok
  - Inputs:
    - `model` (string): ID of the model to use
    - `messages` (array): Chat messages, each with `role` and `content`
    - `temperature` (optional number): Sampling temperature
    - `top_p` (optional number): Nucleus sampling parameter
    - `n` (optional number): Number of completions to generate
    - `max_tokens` (optional number): Maximum tokens to generate
    - `stream` (optional boolean): Whether to stream responses
    - `logit_bias` (optional object): Map of token IDs to bias scores
    - `response_format` (optional object): `{ type: "json_object" | "text" }`
    - `seed` (optional number): Seed for deterministic sampling
  - Returns: Generated chat completion response
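As an illustration of the input shape, a multi-modal chat completion request might be assembled as below. This is a sketch assuming an OpenAI-style content-part format for images; the model ID and image URL are placeholders:

```typescript
// Sketch of a create_chat_completion payload; the model ID, image URL,
// and the content-part shape for images are assumptions.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content:
    | string
    | Array<
        | { type: "text"; text: string }
        | { type: "image_url"; image_url: { url: string } }
      >;
}

const messages: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  {
    role: "user",
    content: [
      { type: "text", text: "What is in this image?" },
      { type: "image_url", image_url: { url: "https://example.com/photo.png" } },
    ],
  },
];

const chatRequest = {
  model: "grok-model-id", // placeholder: use an ID returned by list_models
  messages,
  temperature: 0.7,
  max_tokens: 256,
};
```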
- **`create_completion`**: Create a text completion with Grok
  - Inputs:
    - `model` (string): ID of the model to use
    - `prompt` (string): Text prompt to complete
    - `temperature` (optional number): Sampling temperature
    - `max_tokens` (optional number): Maximum tokens to generate
    - `stream` (optional boolean): Whether to stream responses
    - `logit_bias` (optional object): Map of token IDs to bias scores
    - `seed` (optional number): Seed for deterministic sampling
  - Returns: Generated text completion response
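For reproducible output, `seed` can be combined with a temperature of 0. A minimal sketch of such a payload, where the model ID and the biased token ID are placeholders:

```typescript
// Sketch of a create_completion payload using seed for reproducible
// sampling; the model ID and token ID are placeholders.
const completionRequest = {
  model: "grok-model-id",
  prompt: "Once upon a time",
  temperature: 0,
  max_tokens: 64,
  seed: 42, // same seed and parameters aim for deterministic sampling
  logit_bias: { "1234": -100 }, // strongly discourage token 1234
};
```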
- **`create_embeddings`**: Create embeddings from input text
  - Inputs:
    - `model` (string): ID of the model to use
    - `input` (string or array): Text to embed
    - `encoding_format` (optional string): Format of the embeddings
  - Returns: Vector embeddings of the input text
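Since `input` accepts either a single string or an array of strings, a small helper can normalize it before calling the tool. A sketch, where the model ID and the `"float"` encoding format are assumptions:

```typescript
// Normalize create_embeddings input to an array of strings.
// The model ID and encoding_format value are placeholders.
function buildEmbeddingsRequest(model: string, input: string | string[]) {
  const texts = Array.isArray(input) ? input : [input];
  return { model, input: texts, encoding_format: "float" };
}

const embeddingsRequest = buildEmbeddingsRequest(
  "grok-embedding-model",
  "hello world",
);
```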
## Setup

### Grok API Key

To use this server, you'll need a Grok API key:

1. Obtain a Grok API key from x.ai
2. Keep your API key secure and do not share it publicly
The server also respects `GROK_API_BASE_URL` if you need to point it at a non-default API host.
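For example, in a shell session (the `GROK_API_KEY` variable name is an assumption; check the server's configuration for the exact name it reads):

```shell
# GROK_API_KEY is an assumed variable name; GROK_API_BASE_URL is optional
# and only needed when pointing at a non-default API host.
export GROK_API_KEY="your-api-key-here"
export GROK_API_BASE_URL="https://api.x.ai/v1"  # optional override
```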
## Build

Building the project from source is optional (for generating compiled JavaScript output); `npm start` runs the server with ts-node.
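The script names below are assumptions based on a typical TypeScript project layout; check `package.json` for the exact commands:

```shell
npm install      # install dependencies
npm run build    # optional: compile TypeScript to JavaScript
npm start        # run the server with ts-node
```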
The HTTP server listens on `http://localhost:8080/stream`.
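A quick connectivity check can be made with curl, assuming the MCP Streamable HTTP transport (JSON-RPC over POST). The exact headers and handshake depend on the client, so this is only a smoke test against a locally running server:

```shell
curl -N -X POST http://localhost:8080/stream \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.0"}}}'
```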
## Development

The project supports automatic rebuilding on file changes during development.
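A watch-mode script is the usual setup for this; the command below is an assumption (backed by a watcher such as `nodemon` or `tsc --watch`), so check `package.json` for the actual script name:

```shell
npm run dev   # assumed watch-mode script name
```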
## License

This MCP server is licensed under the MIT License. You are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, see the LICENSE file in the project repository.