MCP OpenAI Tools
A Model Context Protocol (MCP) server that provides access to OpenAI's advanced models (including o3) with web search, code interpreter, and combined analysis capabilities.
Features
Web Search: Search the web using OpenAI's integrated web search capability
Code Interpreter: Execute Python code in OpenAI's sandboxed environment
Search & Analyze: Combine web search with code analysis in a single operation
Direct Prompting: Send prompts directly to OpenAI models with optional web search
Health Check: Monitor server status and configuration
Configurable Reasoning: Adjust reasoning effort levels (low, medium, high) for optimal performance
Prerequisites
Python 3.11 or higher
OpenAI API key with access to o3 or other supported models
uv package manager (recommended) or pip
Installation
Option 1: Install from Source (Recommended for Development)
Clone the repository:
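For example (the repository URL below is a placeholder; use the project's actual location):

```bash
# Placeholder URL -- substitute the real repository location
git clone https://github.com/your-org/mcp-openai-tools.git
cd mcp-openai-tools
```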
Create a virtual environment and install dependencies:
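A minimal sketch using uv, assuming a standard pyproject.toml at the repository root (pip users can substitute python -m venv and pip install -e .):

```bash
uv venv                    # create a .venv in the repository
source .venv/bin/activate  # Windows: .venv\Scripts\activate
uv sync                    # install the project and its dependencies
```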
Option 2: Install as a Dependency in Your Project
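If you prefer to pull the server into another project rather than working from a clone, something like the following should work (the git URL is again a placeholder):

```bash
uv add "git+https://github.com/your-org/mcp-openai-tools.git"
# or with pip:
pip install "git+https://github.com/your-org/mcp-openai-tools.git"
```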
Configuration
1. Set up your API Key
Create a .env file in your project root (not in the mcp-openai-tools directory):
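At minimum it needs your API key; a model override such as OPENAI_MODEL is optional and can also be set in .mcp.json instead (see below):

```
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=o3
```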
Security Note: Never commit your .env file to version control. Add it to your .gitignore.
2. Configure MCP Server
Add the mcp-openai-tools server to your .mcp.json configuration:
For Local Development:
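A sketch of a local-development entry, assuming the package is importable as mcp_openai_tools (adjust the module name and paths to match the actual repository). The command should be the Python interpreter from the virtual environment created above; cwd should point at the directory containing your .env, and PYTHONPATH at the cloned source if the package is not installed into that interpreter, as described under Troubleshooting:

```json
{
  "mcpServers": {
    "mcp-openai-tools": {
      "command": "/path/to/mcp-openai-tools/.venv/bin/python",
      "args": ["-m", "mcp_openai_tools"],
      "cwd": "/path/to/your/project",
      "env": {
        "PYTHONPATH": "/path/to/mcp-openai-tools",
        "OPENAI_MODEL": "o3"
      }
    }
  }
}
```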
For Installed Package:
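If the package is installed in your environment (Option 2), the entry is simpler. ENV_FILE is the optional override described in the next section, and the module name is again an assumption:

```json
{
  "mcpServers": {
    "mcp-openai-tools": {
      "command": "python",
      "args": ["-m", "mcp_openai_tools"],
      "env": {
        "ENV_FILE": "/absolute/path/to/.env"
      }
    }
  }
}
```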
3. Environment Variable Options
The server supports multiple ways to specify the .env file location:
Default: Looks for .env in the current working directory
Parent Directories: Searches up to 3 parent directories
Custom Path: Set the ENV_FILE environment variable to specify a custom path:

"env": { "ENV_FILE": "/custom/path/to/.env", "OPENAI_MODEL": "o3" }
Usage
Available Tools
Once configured, the following tools are available in your MCP client (e.g., Claude Code):
openai_web_search: Search the web for current information
Parameters:
- query: Search query string
- reasoning_effort: "low" | "medium" | "high" (default: "medium")
- model: Optional model override

openai_code_interpreter: Execute Python code in a sandboxed environment
Parameters:
- instruction: What to do with the code
- code: Optional Python code (generated if not provided)
- reasoning_effort: "low" | "medium" | "high" (default: "medium")
- model: Optional model override

openai_search_and_analyze: Combine web search with code analysis
Parameters:
- task: Description of what to search and analyze
- reasoning_effort: "low" | "medium" | "high" (default: "medium")
- model: Optional model override

openai_prompt: Direct prompting with optional web search
Parameters:
- text: Prompt text
- reasoning_effort: "low" | "medium" | "high" (default: "medium")
- model: Optional model override
- include_web_search: Enable web search (default: true)

openai_health_check: Check server status and configuration
Parameters: None
Example Usage in Claude Code
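The prompts below are illustrative only; any natural-language request that names one of the tools above will work:

"Use openai_web_search to find the latest Python 3.13 release notes and summarize them."
"Use openai_code_interpreter to generate and run code that plots a sine wave."
"Use openai_search_and_analyze to gather recent inflation figures and compute the year-over-year change."
"Run openai_health_check to confirm the server is configured correctly."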
Troubleshooting
Common Issues
"OPENAI_API_KEY environment variable is not set"
Ensure your .env file exists and contains OPENAI_API_KEY=your-key
Check that the working directory (cwd) in .mcp.json points to your project directory
Try setting the ENV_FILE environment variable to the absolute path of your .env file
"No .env file found. Using system environment variables only"
This warning appears when no .env file is found; the server may still work if you have set the variables in your system environment
Check the server logs to see which directories were searched
Module not found errors
Ensure PYTHONPATH is set correctly in .mcp.json if using the package from another location
Verify the virtual environment is activated if running locally
API errors or model access issues
Verify your API key has access to the specified model (o3, gpt-5, etc.)
Check OpenAI API status and your account limits
Debugging
Check the server output for detailed logging. The server logs:
Where it's looking for .env files
Which .env file was loaded (if any)
API configuration status
Tool execution details
Development
Running Tests
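Assuming the test suite uses pytest (not stated above, so adjust to the project's actual tooling):

```bash
uv run pytest
```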
Project Structure
Contributing
Contributions are welcome! Please:
Fork the repository
Create a feature branch
Make your changes with tests
Submit a pull request
License
MIT License - see LICENSE file for details
Support
For issues, questions, or suggestions:
Open an issue on GitHub
Check existing issues for solutions
Review the server logs for debugging information