
openai-tool2mcp

by alohays

code-execution

Execute code snippets and receive immediate results through the MCP server, enabling code testing and debugging within AI workflows.

Instructions

Execute code and return the result

Input Schema

Name        Required  Description       Default
parameters  Yes       (none provided)   (none)

Implementation Reference

  • The CodeInterpreterAdapter class provides the core handler logic for the 'code-execution' tool: it translates MCP requests into OpenAI code interpreter parameters and translates OpenAI responses back into MCP format.
    class CodeInterpreterAdapter(ToolAdapter):
        """Adapter for OpenAI's code interpreter tool"""
    
        @property
        def tool_id(self) -> str:
            """Get the MCP tool ID"""
            return "code-execution"
    
        @property
        def openai_tool_type(self) -> str:
            """Get the OpenAI tool type"""
            return "code_interpreter"
    
        @property
        def description(self) -> str:
            """Get the tool description"""
            return "Execute code and return the result"
    
        async def translate_request(self, request: MCPRequest) -> dict:
            """
            Translate MCP request to OpenAI parameters
    
            Args:
                request: The MCP request to translate
    
            Returns:
                Dictionary of OpenAI parameters
            """
            # Extract code to execute
            code = request.parameters.get("code", "")
            language = request.parameters.get("language", "python")
    
            logger.debug(f"Translating code execution request with language: {language}")
    
            # Return OpenAI parameters
            return {"code": code, "language": language}
    
        async def translate_response(self, response: dict) -> MCPResponse:
            """
            Translate OpenAI response to MCP response
    
            Args:
                response: The OpenAI response to translate
    
            Returns:
                MCP response object
            """
            # Extract execution result
            result = response.get("result", {})
    
            logger.debug("Translating code execution response")
    
            # Format result as markdown
            content = format_code_result(result)
    
            # Check for errors
            error = None
            if isinstance(result, dict) and "error" in result:
                error = result["error"]
    
            # Return MCP response
            return MCPResponse(content=content, error=error, context={"language": response.get("language", "python")})
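The adapter's two translate methods can be exercised in isolation. The sketch below uses hypothetical stand-ins for `MCPRequest`, `MCPResponse`, and `format_code_result` (the real types live in openai-tool2mcp, and the formatter shown here is an assumption), but it mirrors the request/response flow above:

```python
import asyncio
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MCPRequest:
    parameters: dict = field(default_factory=dict)

@dataclass
class MCPResponse:
    content: str
    error: Optional[str] = None
    context: Optional[dict] = None

def format_code_result(result) -> str:
    # Hypothetical formatter: render the interpreter's captured stdout.
    output = result.get("stdout", "") if isinstance(result, dict) else str(result)
    return "Output:\n" + output

async def translate_request(request: MCPRequest) -> dict:
    # Mirrors CodeInterpreterAdapter.translate_request: pull code/language
    # out of the MCP parameters, defaulting the language to Python.
    return {
        "code": request.parameters.get("code", ""),
        "language": request.parameters.get("language", "python"),
    }

async def translate_response(response: dict) -> MCPResponse:
    # Mirrors CodeInterpreterAdapter.translate_response: format the result
    # and surface any "error" key the interpreter reported.
    result = response.get("result", {})
    error = result.get("error") if isinstance(result, dict) else None
    return MCPResponse(
        content=format_code_result(result),
        error=error,
        context={"language": response.get("language", "python")},
    )

params = asyncio.run(translate_request(MCPRequest(parameters={"code": "print(2 + 2)"})))
resp = asyncio.run(translate_response({"result": {"stdout": "4\n"}, "language": "python"}))
print(params)
print(resp.content)
```

Note how missing parameters degrade silently (empty code string, Python as the default language) rather than raising, matching the `.get(...)` defaults in the adapter.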
  • Registration of the 'code-execution' tool in ToolRegistry, mapping it to OpenAI's code_interpreter tool.
    "code-execution": {
        "openai_tool": OpenAIBuiltInTools.CODE_INTERPRETER.value,
        "enabled": OpenAIBuiltInTools.CODE_INTERPRETER.value in self.enabled_tools,
        "description": "Execute code in a sandbox environment",
    },
  • Instantiation of CodeInterpreterAdapter and mapping to tool_id 'code-execution' in the server's tools_map.
    def _build_tools_map(self):
        """Build a map of tool adapters"""
        tools_map = {}
    
        # Register default tool adapters
        adapters = [WebSearchAdapter(), CodeInterpreterAdapter(), BrowserAdapter(), FileManagerAdapter()]
    
        for adapter in adapters:
            # Only register if the tool is enabled
            if adapter.openai_tool_type in self.config.tools:
                tools_map[adapter.tool_id] = adapter
    
        return tools_map
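The enable-by-config filtering in `_build_tools_map` can be sketched with hypothetical minimal adapters (the real adapter classes and `self.config.tools` live in the package; the names below are stand-ins):

```python
# Stub adapter exposing only the two attributes _build_tools_map reads.
class StubAdapter:
    def __init__(self, tool_id: str, openai_tool_type: str):
        self.tool_id = tool_id
        self.openai_tool_type = openai_tool_type

adapters = [
    StubAdapter("web-search", "retrieval"),
    StubAdapter("code-execution", "code_interpreter"),
    StubAdapter("browser", "web_browser"),
]

# Stands in for self.config.tools: only these OpenAI tool types are enabled.
enabled_tools = {"retrieval", "code_interpreter"}

# Same filtering rule as _build_tools_map: register an adapter only when
# its OpenAI tool type appears in the configured tool set.
tools_map = {a.tool_id: a for a in adapters if a.openai_tool_type in enabled_tools}
print(sorted(tools_map))  # ['code-execution', 'web-search']
```

The map is keyed by the MCP-facing `tool_id`, so a disabled OpenAI tool type simply never gets an MCP entry.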
  • Dynamic registration of MCP tools including 'code-execution' using @mcp.tool decorator, with a generic handler that delegates to the specific adapter's translate methods and invokes OpenAI.
    def _register_mcp_tools(self):
        """Register tools with the MCP SDK"""
        for tool_id, adapter in self.tools_map.items():
            # Define a tool handler for each adapter
            # Create a closure to properly capture the values
            def create_tool_handler(tool_id=tool_id, adapter=adapter):
                @self.mcp.tool(name=tool_id, description=adapter.description)
                async def tool_handler(**parameters):
                    """
                    MCP tool handler for OpenAI tools.
                    """
                    # Create an MCP request from the parameters
                    mcp_request = MCPRequest(parameters=parameters)
    
                    # Translate the request parameters using the adapter
                    translated_params = await adapter.translate_request(mcp_request)
    
                    # Create an OpenAI tool request
                    openai_request = mcp_to_openai.translate_request(mcp_request, tool_id)
    
                    # Override the parameters with the adapter-specific ones
                    openai_request.parameters = translated_params
    
                    try:
                        # Call OpenAI API to execute the tool
                        openai_response = await self.openai_client.invoke_tool(openai_request)
    
                        # Translate the OpenAI response to MCP format using the adapter
                        if openai_response.tool_outputs:
                            # Use the adapter to translate the tool-specific response
                            mcp_response = await adapter.translate_response(openai_response.tool_outputs[0].output)
    
                            # Add thread_id to context for state management
                            if mcp_response.context is None:
                                mcp_response.context = {}
                            mcp_response.context["thread_id"] = openai_response.thread_id
    
                            # Return the response content which will be used by MCP SDK
                            return mcp_response.content
                        else:
                            # Fallback to generic translation
                            mcp_response = openai_to_mcp.translate_response(openai_response)
                            return mcp_response.content
                    except Exception as e:
                        logger.error(f"Error invoking tool {tool_id}: {e!s}")
                        # Using custom exception class to fix TRY003
                        raise ToolInvocationError() from e
    
                return tool_handler
    
            # Create and register the tool handler
            create_tool_handler()
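The `create_tool_handler(tool_id=tool_id, adapter=adapter)` default arguments are what bind each handler to its own adapter: Python closures resolve free variables late, so without the defaults every handler would see the final loop iteration's values. A minimal sketch of the pitfall:

```python
# Sketch of the closure-capture pitfall the default arguments avoid.
handlers_buggy = []
handlers_fixed = []

for tool_id in ["web-search", "code-execution", "browser"]:
    def buggy():
        return tool_id  # late binding: looked up when called, not when defined

    def fixed(tool_id=tool_id):  # default argument freezes the current value
        return tool_id

    handlers_buggy.append(buggy)
    handlers_fixed.append(fixed)

print([h() for h in handlers_buggy])  # ['browser', 'browser', 'browser']
print([h() for h in handlers_fixed])  # ['web-search', 'code-execution', 'browser']
```

The same early-binding trick applies to `adapter`, which is why both names appear as defaults on `create_tool_handler`.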
  • Helper function mapping 'code-execution' MCP tool ID to OpenAI 'code_interpreter' type.
    def map_tool_id_to_openai_type(tool_id: str) -> str:
        """
        Map MCP tool IDs to OpenAI tool types.
    
        Args:
            tool_id: MCP tool ID
    
        Returns:
            OpenAI tool type
        """
        mapping = {
            "web-search": "retrieval",
            "code-execution": "code_interpreter",
            "browser": "web_browser",
            "file-io": "file_search",
        }
    
        openai_type = mapping.get(tool_id, tool_id)
        logger.debug(f"Mapped MCP tool ID {tool_id} to OpenAI tool type {openai_type}")
    
        return openai_type
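Stripped of logging, the mapper's behavior (including the passthrough fallback for unknown tool IDs) looks like this:

```python
# The mapping table from above, minus logging, to show the fallback:
# an unrecognized tool_id maps to itself rather than raising.
def map_tool_id_to_openai_type(tool_id: str) -> str:
    mapping = {
        "web-search": "retrieval",
        "code-execution": "code_interpreter",
        "browser": "web_browser",
        "file-io": "file_search",
    }
    return mapping.get(tool_id, tool_id)

print(map_tool_id_to_openai_type("code-execution"))  # code_interpreter
print(map_tool_id_to_openai_type("custom-tool"))     # custom-tool (passthrough)
```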
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool executes code and returns a result, but lacks critical details such as execution environment (e.g., sandbox, permissions), safety implications (e.g., destructive effects, rate limits), or output format. This is a significant gap for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single front-loaded sentence, 'Execute code and return the result', that wastes no words. Every part of the sentence contributes to the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of code execution (a mutation tool with potential security implications), no annotations, no output schema, and incomplete parameter documentation, the description is insufficient. It should address execution context, safety, and result details to be complete enough for an AI agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter ('parameters') with 0% description coverage, so the schema provides no semantic information. The description adds no meaning beyond the schema, failing to explain what 'parameters' should contain (e.g., code string, language spec, arguments). For a tool with low schema coverage, this is inadequate compensation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Execute code and return the result' states a clear verb ('execute') and resource ('code'), but it is vague about what kind of code is accepted (language, runtime environment) and does not differentiate the tool from siblings like 'browser' or 'file-io' that might also involve execution. It is not tautological, but it lacks specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'browser' for web-based execution or 'file-io' for file operations. The description implies a general-purpose code execution but offers no context, exclusions, or prerequisites, leaving the agent to guess based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

