# MCP OpenAI Tools

A Model Context Protocol (MCP) server that provides access to OpenAI's advanced models (including o3) with web search, code interpreter, and combined analysis capabilities.

## Features

- **Web Search**: Search the web using OpenAI's integrated web search capability
- **Code Interpreter**: Execute Python code in OpenAI's sandboxed environment
- **Search & Analyze**: Combine web search with code analysis in a single operation
- **Direct Prompting**: Send prompts directly to OpenAI models with optional web search
- **Health Check**: Monitor server status and configuration
- **Configurable Reasoning**: Adjust the reasoning effort level (low, medium, or high) to balance response quality and latency

## Prerequisites

- Python 3.11 or higher
- OpenAI API key with access to o3 or other supported models
- uv package manager (recommended) or pip

## Installation

### Option 1: Install from Source (Recommended for Development)

1. Clone the repository:

   ```bash
   git clone https://github.com/evandavid1/mcp-openai-tools.git
   cd mcp-openai-tools
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   # Using uv (recommended)
   uv venv
   uv pip install -e .

   # Or using pip
   python -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   pip install -e .
   ```

### Option 2: Install as a Dependency in Your Project

```bash
# Using uv
uv add /path/to/mcp-openai-tools

# Or using pip
pip install /path/to/mcp-openai-tools
```

## Configuration

### 1. Set up your API Key

Create a .env file in your project root (not in the mcp-openai-tools directory):

```bash
OPENAI_API_KEY=your-api-key-here
OPENAI_MODEL=o3   # Optional; defaults to gpt-5
```

**Security Note**: Never commit your .env file to version control. Add it to your .gitignore.
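The .env format above is the conventional `KEY=VALUE` one; as a rough, stdlib-only sketch of what loading it involves (this is illustrative, not the server's actual loader, which most likely uses a library such as python-dotenv):

```python
import os

def load_env_file(path):
    """Minimal .env loader sketch: KEY=VALUE lines, '#' comments,
    and no override of variables already set in the environment."""
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            value = value.split("  #", 1)[0]  # drop a trailing inline comment (simplified)
            os.environ.setdefault(key.strip(), value.strip())
```

Using `setdefault` mirrors the common convention that real environment variables win over values read from the file.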

### 2. Configure MCP Server

Add the mcp-openai-tools server to your .mcp.json configuration:

**For Local Development:**

```json
{
  "mcpServers": {
    "mcp-openai-tools": {
      "type": "stdio",
      "command": "/path/to/mcp-openai-tools/.venv/bin/python",
      "args": ["-m", "mcp_openai_tools.main"],
      "cwd": "/path/to/your/project",
      "env": {
        "OPENAI_MODEL": "o3"
      }
    }
  }
}
```

**For Installed Package:**

```json
{
  "mcpServers": {
    "mcp-openai-tools": {
      "type": "stdio",
      "command": "python",
      "args": ["-m", "mcp_openai_tools.main"],
      "cwd": "/path/to/your/project",
      "env": {
        "PYTHONPATH": "/path/to/mcp-openai-tools/src",
        "OPENAI_MODEL": "o3"
      }
    }
  }
}
```

### 3. Environment Variable Options

The server supports multiple ways to specify the .env file location:

- **Default**: Looks for .env in the current working directory
- **Parent Directories**: Searches up to 3 parent directories
- **Custom Path**: Set the ENV_FILE environment variable to specify a custom path:

  ```json
  "env": {
    "ENV_FILE": "/custom/path/to/.env",
    "OPENAI_MODEL": "o3"
  }
  ```

## Usage

### Available Tools

Once configured, the following tools are available in your MCP client (e.g., Claude Code):

1. **openai_web_search**: Search the web for current information

   Parameters:
   - `query`: Search query string
   - `reasoning_effort`: `"low" | "medium" | "high"` (default: `"medium"`)
   - `model`: Optional model override

2. **openai_code_interpreter**: Execute Python code in a sandboxed environment

   Parameters:
   - `instruction`: What to do with the code
   - `code`: Optional Python code (generated if not provided)
   - `reasoning_effort`: `"low" | "medium" | "high"` (default: `"medium"`)
   - `model`: Optional model override

3. **openai_search_and_analyze**: Combine web search with code analysis

   Parameters:
   - `task`: Description of what to search and analyze
   - `reasoning_effort`: `"low" | "medium" | "high"` (default: `"medium"`)
   - `model`: Optional model override

4. **openai_prompt**: Direct prompting with optional web search

   Parameters:
   - `text`: Prompt text
   - `reasoning_effort`: `"low" | "medium" | "high"` (default: `"medium"`)
   - `model`: Optional model override
   - `include_web_search`: Enable web search (default: `true`)

5. **openai_health_check**: Check server status and configuration

   Parameters: None
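All of the tools above share the `reasoning_effort` and `model` parameters. A hypothetical sketch of the argument checking this implies (the function name and internals are illustrative, not the server's actual code):

```python
VALID_EFFORT = {"low", "medium", "high"}

def check_common_args(reasoning_effort="medium", model=None):
    """Normalize the parameters shared by the tools above,
    rejecting anything outside the documented effort levels."""
    if reasoning_effort not in VALID_EFFORT:
        raise ValueError(
            f"reasoning_effort must be one of {sorted(VALID_EFFORT)}, got {reasoning_effort!r}"
        )
    args = {"reasoning_effort": reasoning_effort}
    if model is not None:
        args["model"] = model  # optional per-call override of OPENAI_MODEL
    return args

print(check_common_args("high", model="o3"))
# → {'reasoning_effort': 'high', 'model': 'o3'}
```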

### Example Usage in Claude Code

```python
# Web search example
result = openai_web_search(
    query="latest developments in quantum computing 2024",
    reasoning_effort="high"
)

# Code interpreter example
result = openai_code_interpreter(
    instruction="Create a visualization of fibonacci sequence growth",
    code="import matplotlib.pyplot as plt\n# Generate fibonacci...",
    reasoning_effort="medium"
)

# Combined search and analysis
result = openai_search_and_analyze(
    task="Find current S&P 500 data and create a performance chart",
    reasoning_effort="high"
)
```

## Troubleshooting

### Common Issues

1. **"OPENAI_API_KEY environment variable is not set"**

   - Ensure your .env file exists and contains `OPENAI_API_KEY=your-key`
   - Check that the working directory (`cwd`) in .mcp.json points to your project directory
   - Try setting the ENV_FILE environment variable to the absolute path of your .env file

2. **"No .env file found. Using system environment variables only"**

   - This warning appears when no .env file is found; the server can still work if the required variables are set in your system environment
   - Check the server logs to see which directories were searched

3. **Module not found errors**

   - Ensure `PYTHONPATH` is set correctly in .mcp.json if using the package from another location
   - Verify the virtual environment is activated if running locally

4. **API errors or model access issues**

   - Verify your API key has access to the specified model (o3, gpt-5, etc.)
   - Check OpenAI API status and your account limits
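The first two issues can be checked outside the MCP client with a few lines of Python; this sketch uses only the environment variables documented above (the helper name is illustrative):

```python
import os

def config_status():
    """Report which documented settings are visible to the current process."""
    return {
        "api_key_set": bool(os.environ.get("OPENAI_API_KEY")),
        "model": os.environ.get("OPENAI_MODEL", "gpt-5"),  # documented default
        "env_file_override": os.environ.get("ENV_FILE"),   # None unless set
    }

status = config_status()
if not status["api_key_set"]:
    print("OPENAI_API_KEY is not set - check your .env location, cwd, or ENV_FILE")
print(status)
```

Run it from the same `cwd` your .mcp.json specifies, so it sees the same environment the server would.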

### Debugging

Check the server output for detailed logs. The server logs:

- Where it looked for .env files
- Which .env file was loaded (if any)
- API configuration status
- Tool execution details
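A stdio MCP server has to keep stdout free for protocol messages, so diagnostics like the above normally go to stderr. A minimal sketch of that setup (the logger name and messages are illustrative):

```python
import io
import logging
import sys

def make_logger(stream=None):
    """Create a logger that writes diagnostics to stderr by default,
    keeping stdout free for the MCP protocol stream."""
    logger = logging.getLogger("mcp_openai_tools.sketch")
    logger.setLevel(logging.DEBUG)
    logger.handlers.clear()  # avoid duplicate handlers on re-run
    handler = logging.StreamHandler(stream if stream is not None else sys.stderr)
    handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
    logger.addHandler(handler)
    return logger

# Capture to a buffer here so the example is self-checking.
buf = io.StringIO()
log = make_logger(buf)
log.debug("Searching for .env in: /path/to/your/project")
log.info("Loaded .env from: /path/to/your/project/.env")
```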

## Development

### Running Tests

```bash
# Using pytest
pytest tests/

# With coverage
pytest tests/ --cov=mcp_openai_tools
```

### Project Structure

```
mcp-openai-tools/
├── src/
│   └── mcp_openai_tools/
│       ├── __init__.py
│       ├── main.py               # Entry point
│       ├── server.py             # MCP server setup
│       ├── core/
│       │   ├── config.py         # Configuration management
│       │   └── client.py         # OpenAI client setup
│       ├── tools/
│       │   ├── web_search.py
│       │   ├── code_interpreter.py
│       │   ├── search_analyze.py
│       │   ├── prompt.py
│       │   └── health_check.py
│       └── prompts/
│           └── user_prompts.py
├── tests/
├── .env.example                  # Example environment file
├── .mcp.json.example             # MCP configuration example
├── pyproject.toml                # Package configuration
└── README.md                     # This file
```

## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Make your changes with tests
4. Submit a pull request

## License

MIT License - see the LICENSE file for details

## Support

For issues, questions, or suggestions:

- Open an issue on GitHub
- Check existing issues for solutions
- Review the server logs for debugging information
