
openai-tool2mcp

by alohays

web-search

Search the web for real-time information to answer questions, find current data, or verify facts using this MCP server tool.

Instructions

Search the web for real-time information

Input Schema

Name        Required  Description  Default
parameters  Yes       (none)       (none)

Implementation Reference

  • WebSearchAdapter implements the translation logic between MCP 'web-search' requests/responses and OpenAI web_search tool format. This is the core handler logic for the tool.
    class WebSearchAdapter(ToolAdapter):
        """Adapter for OpenAI's web search tool"""
    
        @property
        def tool_id(self) -> str:
            """Get the MCP tool ID"""
            return "web-search"
    
        @property
        def openai_tool_type(self) -> str:
            """Get the OpenAI tool type"""
            return "web_search"
    
        @property
        def description(self) -> str:
            """Get the tool description"""
            return "Search the web for real-time information"
    
        async def translate_request(self, request: MCPRequest) -> dict:
            """
            Translate MCP request to OpenAI parameters.
    
            Args:
                request (MCPRequest): MCP request
    
            Returns:
                dict: Dictionary of OpenAI parameters
            """
            parameters = {}
            if "search_term" in request.parameters:
                parameters["query"] = request.parameters["search_term"]
            elif "parameters" in request.parameters:
            # In some cases the query may be passed as a raw string directly under the generic "parameters" key
                parameters["query"] = request.parameters["parameters"]
    
        # Enhanced logging
            logger.info(f"[WEB SEARCH] Translated MCP request to tool parameters: {parameters}")
    
            return parameters
    
        async def translate_response(self, response: Any) -> MCPResponse:
            """
            Translate OpenAI tool response to MCP response.
    
            Args:
                response (Any): OpenAI tool response (JSON string or dictionary)
    
            Returns:
                MCPResponse: MCP response
            """
        # Enhanced debug logging
            logger.debug(f"[WEB SEARCH] Translating response of type: {type(response)}")
    
        # Check whether the response is already a JSON string
            if isinstance(response, str):
                try:
                # Parse it if it is already a JSON string
                    logger.debug("[WEB SEARCH] Response is a string, attempting to parse as JSON")
                    parsed_response = json.loads(response)
    
                # Inspect and extract the response content
                    logger.debug(
                        f"[WEB SEARCH] Parsed JSON with keys: {list(parsed_response.keys()) if isinstance(parsed_response, dict) else 'not a dict'}"
                    )
    
                # Extract the content (converted to a string)
                    if isinstance(parsed_response, dict) and "content" in parsed_response:
                        content = str(parsed_response["content"])
                        logger.info(f"[WEB SEARCH] Extracted content from JSON response, length: {len(content)}")
                        return MCPResponse(content=content, context={"raw_response": response})
                    else:
                    # No content key, so serialize the whole response to a string
                        content_str = json.dumps(parsed_response)
                        logger.info(
                            f"[WEB SEARCH] No content key found, using full response as content, length: {len(content_str)}"
                        )
                        return MCPResponse(content=content_str, context={"raw_response": response})
    
                except json.JSONDecodeError:
                # Fall back to the raw string if JSON parsing fails
                    logger.warning("[WEB SEARCH] Failed to parse response as JSON, using raw string")
                    return MCPResponse(content=response, context={"raw_response": response})
    
        # Dictionary responses
            elif isinstance(response, dict):
                logger.debug(f"[WEB SEARCH] Response is a dict with keys: {list(response.keys())}")
    
            # Use the "content" value when the key is present
                if "content" in response:
                # content must be converted to a string
                    content = str(response["content"])
                    logger.info(f"[WEB SEARCH] Extracted content from dict response, length: {len(content)}")
                    return MCPResponse(content=content, context={"raw_response": json.dumps(response)})
    
            # No content key, so serialize the whole dict to a JSON string
                content_str = json.dumps(response)
                logger.info(
                    f"[WEB SEARCH] No content key in dict, using full dict as JSON string, length: {len(content_str)}"
                )
                return MCPResponse(content=content_str, context={"raw_response": content_str})
    
        # All other types
            else:
            # Convert safely to a string
                logger.warning(f"[WEB SEARCH] Unexpected response type: {type(response)}, converting to string")
                content_str = str(response)
                return MCPResponse(content=content_str, context={"raw_response": content_str})
  • Registers the 'web-search' MCP tool in the ToolRegistry, mapping it to OpenAI's 'web_search'.
    "web-search": {
        "openai_tool": OpenAIBuiltInTools.WEB_SEARCH.value,
        "enabled": OpenAIBuiltInTools.WEB_SEARCH.value in self.enabled_tools,
        "description": "Search the web for information",
    },
  • Instantiates WebSearchAdapter and maps it to 'web-search' tool_id in the server's tools_map.
    # Register default tool adapters
    adapters = [WebSearchAdapter(), CodeInterpreterAdapter(), BrowserAdapter(), FileManagerAdapter()]
    
    for adapter in adapters:
        # Only register if the tool is enabled
        if adapter.openai_tool_type in self.config.tools:
            tools_map[adapter.tool_id] = adapter
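The config gating above can be illustrated with a minimal sketch (the adapter classes and enabled-tools set below are stand-ins, not the real openai-tool2mcp types): only adapters whose OpenAI tool type appears in the configured tool list end up in tools_map.

```python
class WebSearchAdapter:
    tool_id = "web-search"
    openai_tool_type = "web_search"

class BrowserAdapter:
    tool_id = "browser"
    openai_tool_type = "browser"

# Hypothetical enabled-tool list, as would come from the server config
enabled = {"web_search"}

tools_map = {}
for adapter in (WebSearchAdapter(), BrowserAdapter()):
    # Only register adapters whose backing OpenAI tool is enabled
    if adapter.openai_tool_type in enabled:
        tools_map[adapter.tool_id] = adapter

print(list(tools_map))  # -> ['web-search']
```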
  • Registers all tools including 'web-search' with the FastMCP server using dynamic @mcp.tool decorators.
    """Register tools with the MCP SDK"""
    for tool_id, adapter in self.tools_map.items():
        # Define a tool handler for each adapter
        # Create a closure to properly capture the values
        def create_tool_handler(tool_id=tool_id, adapter=adapter):
            @self.mcp.tool(name=tool_id, description=adapter.description)
            async def tool_handler(**parameters):
                """
                MCP tool handler for OpenAI tools.
                """
                # Create an MCP request from the parameters
                mcp_request = MCPRequest(parameters=parameters)
    
                # Translate the request parameters using the adapter
                translated_params = await adapter.translate_request(mcp_request)
    
                # Create an OpenAI tool request
                openai_request = mcp_to_openai.translate_request(mcp_request, tool_id)
    
                # Override the parameters with the adapter-specific ones
                openai_request.parameters = translated_params
    
                try:
                    # Call OpenAI API to execute the tool
                    openai_response = await self.openai_client.invoke_tool(openai_request)
    
                    # Translate the OpenAI response to MCP format using the adapter
                    if openai_response.tool_outputs:
                        # Use the adapter to translate the tool-specific response
                        mcp_response = await adapter.translate_response(openai_response.tool_outputs[0].output)
    
                        # Add thread_id to context for state management
                        if mcp_response.context is None:
                            mcp_response.context = {}
                        mcp_response.context["thread_id"] = openai_response.thread_id
    
                        # Return the response content which will be used by MCP SDK
                        return mcp_response.content
                    else:
                        # Fallback to generic translation
                        mcp_response = openai_to_mcp.translate_response(openai_response)
                        return mcp_response.content
                except Exception as e:
                    logger.error(f"Error invoking tool {tool_id}: {e!s}")
                    # Using custom exception class to fix TRY003
                    raise ToolInvocationError() from e
    
            return tool_handler
    
        # Create and register the tool handler
        create_tool_handler()
  • Dynamic tool handler generated for each tool (including web-search) that orchestrates MCP request translation, OpenAI tool invocation, and response translation using the specific adapter.
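The default-argument pattern in `create_tool_handler(tool_id=tool_id, adapter=adapter)` matters: Python closures bind names late, so without it every generated handler would see the values from the last loop iteration. A minimal illustration:

```python
# Late binding: each lambda closes over the *name* tool_id,
# so all of them see its final value after the loop ends.
handlers_late = []
for tool_id in ["web-search", "browser"]:
    handlers_late.append(lambda: tool_id)

print([h() for h in handlers_late])  # -> ['browser', 'browser']

# Default arguments are evaluated at definition time,
# freezing each iteration's value into the handler.
handlers_bound = []
for tool_id in ["web-search", "browser"]:
    handlers_bound.append(lambda tool_id=tool_id: tool_id)

print([h() for h in handlers_bound])  # -> ['web-search', 'browser']
```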
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'real-time information', hinting at dynamic data retrieval, but fails to describe critical traits like rate limits, authentication needs, or output format. For a tool with no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It is appropriately sized and front-loaded, clearly stating the tool's purpose without unnecessary elaboration. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (web search with real-time data), lack of annotations, no output schema, and low parameter coverage, the description is incomplete. It does not address how results are returned, error handling, or integration with sibling tools, leaving the agent with insufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description does not explain the single parameter 'parameters' beyond what the schema provides. It adds no meaning regarding what the parameter should contain (e.g., search query format) or how it influences the search. With low coverage and no compensatory details, the description falls short.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
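One way to close this gap would be per-parameter descriptions in the input schema. A hypothetical enriched schema, expressed as a JSON-Schema-style dict (the wording is illustrative, not taken from the actual server):

```python
# Illustrative input schema for the web-search tool with a documented parameter
WEB_SEARCH_SCHEMA = {
    "type": "object",
    "properties": {
        "parameters": {
            "type": "string",
            "description": (
                "The search query as plain text, e.g. 'current EUR/USD rate'. "
                "The full string is forwarded to OpenAI's web_search tool "
                "as its 'query' parameter."
            ),
        }
    },
    "required": ["parameters"],
}

print(WEB_SEARCH_SCHEMA["properties"]["parameters"]["type"])  # -> string
```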

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool's purpose as 'Search the web for real-time information', which includes a specific verb ('Search') and resource ('the web'). However, it lacks differentiation from sibling tools like 'browser' or 'code-execution', leaving the agent to infer distinctions. The purpose is clear but not specific enough to distinguish it from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus siblings such as 'browser' or 'file-io'. It implies usage for web searches but does not specify contexts, exclusions, or alternatives. This leaves the agent with minimal direction for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
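Taken together, the critiques above suggest a fuller description. A hypothetical rewrite (the wording is a suggestion only; the auth and rate-limit claims assume the server proxies OpenAI's hosted web_search tool, as the implementation indicates):

```python
# Suggested replacement for WebSearchAdapter.description (illustrative text)
IMPROVED_DESCRIPTION = (
    "Search the web for real-time information (news, current data, fact "
    "checks) via OpenAI's hosted web_search tool. Read-only with no side "
    "effects; requires a configured OpenAI API key and is subject to "
    "OpenAI rate limits. Pass the query as a plain string; results are "
    "returned as plain text. Prefer the 'browser' tool when you need to "
    "read or interact with a specific page."
)

print(len(IMPROVED_DESCRIPTION) < 500)  # -> True
```

Each sentence addresses one reviewed dimension: behavior (read-only, API key, rate limits), parameters (plain-string query), completeness (plain-text results), and usage guidance (when to prefer 'browser').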
