# code_execution
Execute Python code in a sandboxed environment based on natural language prompts. Generate and run code, then return both the code and execution results.
## Instructions
Execute Python code in a sandboxed environment. Gemini generates and runs code, returning both the code and results.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Describe what code to write and execute | |
| model | No | Gemini model to use | gemini-2.5-flash |
| temperature | No | Sampling temperature | |
| maxOutputTokens | No | Maximum output tokens | |
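For reference, a client call might supply arguments like the following. The values are illustrative; only `prompt` is required, and the other fields fall back to the defaults above:

```json
{
  "prompt": "Compute the first 10 Fibonacci numbers and print them",
  "model": "gemini-2.5-flash",
  "temperature": 0.2,
  "maxOutputTokens": 2048
}
```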
## Implementation Reference
- **src/tools/code-execution.ts:25-57** (handler) — Main handler function that executes Python code by calling Gemini's code execution tool. It takes prompt, model, temperature, and maxOutputTokens as input, processes the response to extract text, executable code, and execution results, then returns formatted output.

  ```typescript
  async ({ prompt, model, temperature, maxOutputTokens }) => {
    try {
      const response = await ai.models.generateContent({
        model,
        contents: prompt,
        config: {
          temperature,
          maxOutputTokens,
          tools: [{ codeExecution: {} }],
        },
      });
      const parts = response.candidates?.[0]?.content?.parts ?? [];
      const sections: string[] = [];
      for (const part of parts) {
        if (part.text) {
          sections.push(part.text);
        } else if (part.executableCode) {
          sections.push(`\`\`\`python\n${part.executableCode.code}\n\`\`\``);
        } else if (part.codeExecutionResult) {
          const outcome = part.codeExecutionResult.outcome ?? 'UNKNOWN';
          sections.push(`**Execution result** (${outcome}):\n\`\`\`\n${part.codeExecutionResult.output}\n\`\`\``);
        }
      }
      return {
        content: [{ type: 'text' as const, text: sections.join('\n\n') }],
      };
    } catch (error) {
      return formatToolError(error);
    }
  },
  ```
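The part-processing loop in the handler can be exercised in isolation. The sketch below reproduces that loop as a standalone function with minimal stand-in types; the sample parts are illustrative, not actual Gemini output:

```typescript
// Minimal stand-in for the SDK's response Part shape, covering only
// the fields the handler reads.
interface Part {
  text?: string;
  executableCode?: { code: string };
  codeExecutionResult?: { outcome?: string; output: string };
}

// Mirrors the handler's loop: each part becomes one markdown section,
// joined with blank lines.
function formatParts(parts: Part[]): string {
  const sections: string[] = [];
  for (const part of parts) {
    if (part.text) {
      sections.push(part.text);
    } else if (part.executableCode) {
      sections.push(`\`\`\`python\n${part.executableCode.code}\n\`\`\``);
    } else if (part.codeExecutionResult) {
      const outcome = part.codeExecutionResult.outcome ?? 'UNKNOWN';
      sections.push(`**Execution result** (${outcome}):\n\`\`\`\n${part.codeExecutionResult.output}\n\`\`\``);
    }
  }
  return sections.join('\n\n');
}

// Illustrative parts resembling a code-execution response.
const sample: Part[] = [
  { text: 'Computing 2 + 2:' },
  { executableCode: { code: 'print(2 + 2)' } },
  { codeExecutionResult: { outcome: 'OUTCOME_OK', output: '4\n' } },
];

console.log(formatParts(sample));
```

Because unrecognized parts are simply skipped, the function degrades gracefully when a response contains only text or only code.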
- **src/tools/code-execution.ts:7-59** (registration) — Registers the code_execution tool with the MCP server. Defines tool metadata (title, description, annotations), the input schema using Zod validation, and connects the handler function.

  ```typescript
  export function register(server: McpServer, ai: GoogleGenAI): void {
    server.registerTool(
      'code_execution',
      {
        title: 'Code Execution',
        description: 'Execute Python code in a sandboxed environment. Gemini generates and runs code, returning both the code and results.',
        inputSchema: {
          prompt: z.string().min(1).describe('Describe what code to write and execute'),
          model: TextModel.default('gemini-2.5-flash').describe('Gemini model to use'),
          temperature: z.number().min(0).max(2).optional().describe('Sampling temperature'),
          maxOutputTokens: z.number().min(1).optional().describe('Maximum output tokens'),
        },
        annotations: {
          readOnlyHint: true,
          destructiveHint: false,
          openWorldHint: true,
        },
      },
      async ({ prompt, model, temperature, maxOutputTokens }) => {
        try {
          const response = await ai.models.generateContent({
            model,
            contents: prompt,
            config: {
              temperature,
              maxOutputTokens,
              tools: [{ codeExecution: {} }],
            },
          });
          const parts = response.candidates?.[0]?.content?.parts ?? [];
          const sections: string[] = [];
          for (const part of parts) {
            if (part.text) {
              sections.push(part.text);
            } else if (part.executableCode) {
              sections.push(`\`\`\`python\n${part.executableCode.code}\n\`\`\``);
            } else if (part.codeExecutionResult) {
              const outcome = part.codeExecutionResult.outcome ?? 'UNKNOWN';
              sections.push(`**Execution result** (${outcome}):\n\`\`\`\n${part.codeExecutionResult.output}\n\`\`\``);
            }
          }
          return {
            content: [{ type: 'text' as const, text: sections.join('\n\n') }],
          };
        } catch (error) {
          return formatToolError(error);
        }
      },
    );
  }
  ```
- **src/tools/code-execution.ts:13-18** (schema) — Input schema definition using Zod. Validates prompt (required string), model (text model enum with default), temperature (optional number 0-2), and maxOutputTokens (optional positive number).

  ```typescript
  inputSchema: {
    prompt: z.string().min(1).describe('Describe what code to write and execute'),
    model: TextModel.default('gemini-2.5-flash').describe('Gemini model to use'),
    temperature: z.number().min(0).max(2).optional().describe('Sampling temperature'),
    maxOutputTokens: z.number().min(1).optional().describe('Maximum output tokens'),
  },
  ```
- **src/types.ts:3-10** (schema) — TextModel enum schema defining the Gemini models allowed for text/code execution: gemini-2.5-flash, gemini-2.5-pro, and several preview versions. Used as the model parameter of the code_execution tool.

  ```typescript
  export const TextModel = z.enum([
    'gemini-2.5-flash',
    'gemini-2.5-pro',
    'gemini-3-flash-preview',
    'gemini-3-pro-preview',
    'gemini-3.1-pro-preview',
  ]);
  export type TextModel = z.infer<typeof TextModel>;
  ```
- **src/utils/errors.ts:1-7** (helper) — formatToolError helper that formats error objects into a standardized MCP response structure with an isError flag; used by the code_execution handler for error handling.

  ```typescript
  export function formatToolError(error: unknown) {
    const text = error instanceof Error ? error.message : String(error);
    return {
      content: [{ type: 'text' as const, text }],
      isError: true,
    };
  }
  ```
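A quick sketch of the helper's behavior, with the function reproduced from the source above and illustrative inputs (the error messages are made up for the example):

```typescript
// Reproduced from src/utils/errors.ts: normalizes any thrown value
// into an MCP-style error response.
function formatToolError(error: unknown) {
  const text = error instanceof Error ? error.message : String(error);
  return {
    content: [{ type: 'text' as const, text }],
    isError: true,
  };
}

// Error instances contribute their message...
const fromError = formatToolError(new Error('API quota exceeded'));
// → { content: [{ type: 'text', text: 'API quota exceeded' }], isError: true }

// ...while arbitrary thrown values are stringified.
const fromString = formatToolError('timeout');
// → { content: [{ type: 'text', text: 'timeout' }], isError: true }

console.log(fromError, fromString);
```

Because the handler returns this structure instead of rethrowing, a Gemini API failure surfaces to the MCP client as a normal tool result flagged with `isError: true`.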