Glama
georgejeffers

Gemini MCP Server

code_execution

Execute Python code in a sandboxed environment based on natural language prompts. Generate and run code, then return both the code and execution results.

Instructions

Execute Python code in a sandboxed environment. Gemini generates and runs code, returning both the code and results.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Describe what code to write and execute | |
| model | No | Gemini model to use | gemini-2.5-flash |
| temperature | No | Sampling temperature | |
| maxOutputTokens | No | Maximum output tokens | |
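Given the schema above, an MCP `tools/call` request against this tool could look like the following. The argument values are illustrative only:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "code_execution",
    "arguments": {
      "prompt": "Compute the first 10 Fibonacci numbers and print them",
      "model": "gemini-2.5-flash",
      "temperature": 0.2
    }
  }
}
```

Only `prompt` is required; `model` falls back to `gemini-2.5-flash` when omitted.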

Implementation Reference

  • Main handler function that executes Python code by calling Gemini's code execution tool. Takes prompt, model, temperature, and maxOutputTokens as input, processes the response to extract text, executable code, and execution results, then returns formatted output.
    ```typescript
    async ({ prompt, model, temperature, maxOutputTokens }) => {
      try {
        const response = await ai.models.generateContent({
          model,
          contents: prompt,
          config: {
            temperature,
            maxOutputTokens,
            tools: [{ codeExecution: {} }],
          },
        });
        const parts = response.candidates?.[0]?.content?.parts ?? [];
        const sections: string[] = [];
        for (const part of parts) {
          if (part.text) {
            sections.push(part.text);
          } else if (part.executableCode) {
            sections.push(`\`\`\`python\n${part.executableCode.code}\n\`\`\``);
          } else if (part.codeExecutionResult) {
            const outcome = part.codeExecutionResult.outcome ?? 'UNKNOWN';
            sections.push(`**Execution result** (${outcome}):\n\`\`\`\n${part.codeExecutionResult.output}\n\`\`\``);
          }
        }
        return {
          content: [{ type: 'text' as const, text: sections.join('\n\n') }],
        };
      } catch (error) {
        return formatToolError(error);
      }
    },
    ```
  • Registers the code_execution tool with the MCP server. Defines tool metadata (title, description, annotations), input schema using Zod validation, and connects the handler function.
    ```typescript
    export function register(server: McpServer, ai: GoogleGenAI): void {
      server.registerTool(
        'code_execution',
        {
          title: 'Code Execution',
          description:
            'Execute Python code in a sandboxed environment. Gemini generates and runs code, returning both the code and results.',
          inputSchema: {
            prompt: z.string().min(1).describe('Describe what code to write and execute'),
            model: TextModel.default('gemini-2.5-flash').describe('Gemini model to use'),
            temperature: z.number().min(0).max(2).optional().describe('Sampling temperature'),
            maxOutputTokens: z.number().min(1).optional().describe('Maximum output tokens'),
          },
          annotations: {
            readOnlyHint: true,
            destructiveHint: false,
            openWorldHint: true,
          },
        },
        async ({ prompt, model, temperature, maxOutputTokens }) => {
          try {
            const response = await ai.models.generateContent({
              model,
              contents: prompt,
              config: {
                temperature,
                maxOutputTokens,
                tools: [{ codeExecution: {} }],
              },
            });
            const parts = response.candidates?.[0]?.content?.parts ?? [];
            const sections: string[] = [];
            for (const part of parts) {
              if (part.text) {
                sections.push(part.text);
              } else if (part.executableCode) {
                sections.push(`\`\`\`python\n${part.executableCode.code}\n\`\`\``);
              } else if (part.codeExecutionResult) {
                const outcome = part.codeExecutionResult.outcome ?? 'UNKNOWN';
                sections.push(`**Execution result** (${outcome}):\n\`\`\`\n${part.codeExecutionResult.output}\n\`\`\``);
              }
            }
            return {
              content: [{ type: 'text' as const, text: sections.join('\n\n') }],
            };
          } catch (error) {
            return formatToolError(error);
          }
        },
      );
    }
    ```
  • Input schema definition using Zod. Validates prompt (required string), model (text model enum with default), temperature (optional number 0-2), and maxOutputTokens (optional number).
    ```typescript
    inputSchema: {
      prompt: z.string().min(1).describe('Describe what code to write and execute'),
      model: TextModel.default('gemini-2.5-flash').describe('Gemini model to use'),
      temperature: z.number().min(0).max(2).optional().describe('Sampling temperature'),
      maxOutputTokens: z.number().min(1).optional().describe('Maximum output tokens'),
    },
    ```
  • TextModel enum schema defining allowed Gemini models for text/code execution: gemini-2.5-flash, gemini-2.5-pro, and various preview versions. Used as model parameter in code_execution tool.
    ```typescript
    export const TextModel = z.enum([
      'gemini-2.5-flash',
      'gemini-2.5-pro',
      'gemini-3-flash-preview',
      'gemini-3-pro-preview',
      'gemini-3.1-pro-preview',
    ]);
    export type TextModel = z.infer<typeof TextModel>;
    ```
  • formatToolError helper function that formats error objects into a standardized MCP response structure with isError flag, used by the code_execution handler for error handling.
    ```typescript
    export function formatToolError(error: unknown) {
      const text = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: 'text' as const, text }],
        isError: true,
      };
    }
    ```
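The part-formatting loop and the error helper above are easy to exercise in isolation. The sketch below re-implements that logic as a standalone script; `Part` and `formatParts` are illustrative names introduced here, not identifiers from the actual codebase, which works on `@google/genai` response parts inline inside the handler:

```typescript
// Minimal standalone sketch of the handler's response formatting.
// `Part` and `formatParts` are hypothetical names, for illustration only.
interface Part {
  text?: string;
  executableCode?: { code: string };
  codeExecutionResult?: { outcome?: string; output?: string };
}

function formatParts(parts: Part[]): string {
  const sections: string[] = [];
  for (const part of parts) {
    if (part.text) {
      // Plain prose from the model.
      sections.push(part.text);
    } else if (part.executableCode) {
      // The Python that Gemini generated, wrapped in a fenced block.
      sections.push('```python\n' + part.executableCode.code + '\n```');
    } else if (part.codeExecutionResult) {
      // The sandbox output plus its outcome status.
      const outcome = part.codeExecutionResult.outcome ?? 'UNKNOWN';
      sections.push(
        '**Execution result** (' + outcome + '):\n```\n' +
          (part.codeExecutionResult.output ?? '') + '\n```',
      );
    }
  }
  return sections.join('\n\n');
}

// The error helper from above (export dropped so this runs as a script):
// Error instances surface their message; anything else is stringified.
function formatToolError(error: unknown) {
  const text = error instanceof Error ? error.message : String(error);
  return {
    content: [{ type: 'text' as const, text }],
    isError: true,
  };
}

const report = formatParts([
  { text: 'Here is the computation:' },
  { executableCode: { code: 'print(2 + 2)' } },
  { codeExecutionResult: { outcome: 'OUTCOME_OK', output: '4\n' } },
]);

const failure = formatToolError(new Error('quota exceeded'));
```

The three-branch `if`/`else if` chain mirrors the three part types a code-execution response can carry, so a single response interleaves prose, generated Python, and sandbox output in model order.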

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/georgejeffers/gemini-mcp-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.