code_executor

Execute code snippets to test algorithms, debug programs, or demonstrate programming concepts using Grok MCP's computational capabilities.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The prompt describing the code to write and run | — |
| model | No | Grok model to use | grok-4-1-fast-reasoning |
| max_turns | No | Maximum number of chat turns | — |
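For illustration, the arguments for a call might look like the following; the prompt text and turn cap are hypothetical values, and only `prompt` is required by the schema:

```python
import json

# Hypothetical arguments for a code_executor call; only "prompt" is required.
arguments = {
    "prompt": "Write and run a Python snippet that computes the 10th Fibonacci number.",
    "model": "grok-4-1-fast-reasoning",  # schema default
    "max_turns": 5,                      # optional cap on chat turns
}

payload = json.dumps(arguments)
print(payload)
```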

Implementation Reference

  • The handler implementation for the code_executor MCP tool: it initializes an xAI client, runs a chat session with the code-execution tool enabled, and appends any tool outputs to the response.
    import os
    from typing import Optional

    # SDK imports assumed from the xAI Python SDK (xai_sdk).
    from xai_sdk import Client
    from xai_sdk.chat import user
    from xai_sdk.tools import code_execution

    XAI_API_KEY = os.getenv("XAI_API_KEY")

    @mcp.tool()
    async def code_executor(
        prompt: str,
        model: str = "grok-4-1-fast-reasoning",
        max_turns: Optional[int] = None
    ):
        client = Client(api_key=XAI_API_KEY)

        # Enable server-side code execution and ask for tool outputs in the response.
        chat_params = {"model": model, "tools": [code_execution()], "include": ["code_execution_call_output"]}
        if max_turns is not None:
            chat_params["max_turns"] = max_turns

        chat = client.chat.create(**chat_params)
        chat.append(user(prompt))
        response = chat.sample()

        client.close()

        # Combine the model's text with any fenced code-execution outputs.
        result = [response.content]
        if response.tool_outputs:
            result.append("\n\n**Code Output(s):**")
            for output in response.tool_outputs:
                result.append(f"```\n{output.message.content}\n```")
        return "\n".join(result)
    
    
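As a standalone sketch of the response-assembly step, the snippet below mimics how the handler joins the model's text with fenced code outputs; the `SimpleNamespace` objects are stand-ins for the SDK's real response types, and the values are hypothetical:

```python
from types import SimpleNamespace

# Stand-ins for the SDK response and its tool outputs (hypothetical values).
response = SimpleNamespace(
    content="The 10th Fibonacci number is 55.",
    tool_outputs=[SimpleNamespace(message=SimpleNamespace(content="55"))],
)

fence = "`" * 3  # a literal ``` markdown fence

# Same assembly logic as the handler: model text first, then each output fenced.
result = [response.content]
if response.tool_outputs:
    result.append("\n\n**Code Output(s):**")
    for output in response.tool_outputs:
        result.append(f"{fence}\n{output.message.content}\n{fence}")

combined = "\n".join(result)
print(combined)
```

The fenced outputs render as code blocks in any markdown-aware MCP client.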

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/merterbak/Grok-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.