GPT-5 MCP Server (TypeScript)
An MCP server that exposes a gpt5_query tool for GPT-5 inference via OpenAI Responses API, with optional Web Search Preview. Supports per-call overrides for verbosity, reasoning effort, and other parameters.
Features
- TypeScript MCP server using @modelcontextprotocol/sdk
- gpt5_query tool
- web_search_preview integration (optional)
- verbosity (low|medium|high)
- reasoning.effort (low|medium|high)
- tool_choice (auto|none), parallel_tool_calls
- system prompt, model, max_output_tokens
- Config via environment variables with per-call overrides
Quick Start
Install dependencies
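Assuming npm as the package manager:

```bash
npm install
```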
Configure environment
Create .env (or export env vars):
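A sample .env using the variables documented under "Environment variable mapping" below; values shown are illustrative:

```bash
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-5
OPENAI_TIMEOUT_MS=120000
DEFAULT_VERBOSITY=medium
REASONING_EFFORT=medium
WEB_SEARCH_DEFAULT_ENABLED=false
WEB_SEARCH_CONTEXT_SIZE=medium
```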
Build and run
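For example, assuming a standard build script that emits dist/cli.js (script names may differ; check package.json):

```bash
npm run build
node dist/cli.js
```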
For development (watch mode):
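A typical watch-mode invocation; the script name is an assumption, so check package.json:

```bash
npm run dev
```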
Using with MCP Clients (Claude Code, Claude Desktop)
This server speaks Model Context Protocol (MCP) over stdio and emits pure JSON to stdout, making it safe for Claude Code and Claude Desktop.
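For example, a minimal TypeScript client can spawn the server over stdio and call the tool. This is a sketch using the @modelcontextprotocol/sdk client; the path is a placeholder and the prompt argument name is an assumption (check the input schema below).

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the built server over stdio (path is a placeholder).
  const transport = new StdioClientTransport({
    command: "node",
    args: ["/absolute/path/to/gpt5-mcp-server/dist/cli.js"],
    env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? "" },
  });

  const client = new Client({ name: "example-client", version: "0.1.0" });
  await client.connect(transport);

  // Call the gpt5_query tool; "prompt" is an assumed argument name.
  const result = await client.callTool({
    name: "gpt5_query",
    arguments: { prompt: "Say hello from MCP.", verbosity: "low" },
  });
  console.log(JSON.stringify(result.content, null, 2));

  await client.close();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```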
Prerequisites
- Node.js 18+
- OpenAI API key via .env or environment variable
Build
Run directly (recommended)
- Command: node
- Args: dist/cli.js
- CWD: repository root (required if you want .env to be loaded)
Example:
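(Run from the repository root so .env is picked up; the path is a placeholder.)

```bash
cd /absolute/path/to/gpt5-mcp-server
node dist/cli.js
```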
Add to Claude Code (VS Code)
Command Palette → "Claude: Manage MCP Servers"
"Add server" with:
- Name: gpt5-mcp
- Command: node (or absolute path, e.g., /opt/homebrew/bin/node)
- Args: ["/absolute/path/to/gpt5-mcp-server/dist/cli.js"] (or just gpt5-mcp-server if installed globally)
- Env (choose one):
  - Option A (ENV_FILE): ENV_FILE=/absolute/path/to/gpt5-mcp-server/.env
  - Option B (explicit): set OPENAI_API_KEY, OPENAI_MODEL, OPENAI_TIMEOUT_MS, DEFAULT_VERBOSITY, REASONING_EFFORT, WEB_SEARCH_DEFAULT_ENABLED, WEB_SEARCH_CONTEXT_SIZE
Add to Claude Desktop
Edit the config file (e.g., macOS: ~/Library/Application Support/Claude/claude_desktop_config.json) and add:
Option A: using ENV_FILE
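A sketch of the claude_desktop_config.json entry, using the standard mcpServers layout (paths are placeholders):

```json
{
  "mcpServers": {
    "gpt5-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/gpt5-mcp-server/dist/cli.js"],
      "env": {
        "ENV_FILE": "/absolute/path/to/gpt5-mcp-server/.env"
      }
    }
  }
}
```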
Option B: explicit env vars
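The same layout with the variables set inline (values are illustrative):

```json
{
  "mcpServers": {
    "gpt5-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/gpt5-mcp-server/dist/cli.js"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "OPENAI_MODEL": "gpt-5",
        "OPENAI_TIMEOUT_MS": "120000",
        "DEFAULT_VERBOSITY": "medium",
        "REASONING_EFFORT": "medium",
        "WEB_SEARCH_DEFAULT_ENABLED": "false",
        "WEB_SEARCH_CONTEXT_SIZE": "medium"
      }
    }
  }
}
```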
CLI usage
The package exposes a bin entry.
- Local link: npm link, then run gpt5-mcp-server
- Global (after publish): npm i -g gpt5-mcp-server, then gpt5-mcp-server
- Direct: node /absolute/path/to/gpt5-mcp-server/dist/cli.js
Web Search notes
- Due to OpenAI constraints, web_search_preview cannot be combined with reasoning.effort = minimal.
- This server automatically bumps effort to medium if web_search.enabled = true.
- If you need strict minimal, set web_search.enabled = false.
Troubleshooting
- JSON parse error (Unexpected token ...): likely extra logs on stdio. Use node dist/cli.js and avoid npx.
- Auth error: ensure OPENAI_API_KEY is provided.
- Timeout: increase OPENAI_TIMEOUT_MS (e.g., 120000).
- 400 with Web Search: caused by minimal effort plus web search. Effort is auto-bumped to medium; alternatively, set reasoning_effort=medium or disable web_search.
Tool: gpt5_query
Input schema (JSON):
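A sketch based on the parameters documented under "Defaults and behavior" below; the exact property names (notably the prompt field) are assumptions and should be checked against the built server's schema:

```json
{
  "type": "object",
  "properties": {
    "prompt": { "type": "string" },
    "system": { "type": "string" },
    "model": { "type": "string" },
    "reasoning_effort": { "enum": ["low", "minimal", "medium", "high"] },
    "verbosity": { "enum": ["low", "medium", "high"] },
    "tool_choice": { "enum": ["auto", "none"] },
    "parallel_tool_calls": { "type": "boolean" },
    "max_output_tokens": { "type": "integer" },
    "web_search": {
      "type": "object",
      "properties": {
        "enabled": { "type": "boolean" },
        "search_context_size": { "enum": ["low", "medium", "high"] }
      }
    }
  },
  "required": ["prompt"]
}
```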
Example call (Inspector or client):
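For instance, as tools/call parameters (the prompt field name is an assumption, as above):

```json
{
  "name": "gpt5_query",
  "arguments": {
    "prompt": "Summarize the key points of the MCP specification.",
    "reasoning_effort": "medium",
    "verbosity": "low",
    "web_search": { "enabled": true, "search_context_size": "medium" }
  }
}
```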
Defaults and behavior
- model: defaults to OPENAI_MODEL (env). Example: gpt-5.
- system: optional. Sent as instructions.
- reasoning_effort: accepts low|minimal|medium|high. Internally, low is mapped to minimal. Constraint: when web_search.enabled=true and effort is minimal, it is auto-bumped to medium to satisfy OpenAI constraints.
- verbosity: defaults to DEFAULT_VERBOSITY (env). Sent as text.verbosity.
- tool_choice: default auto.
- parallel_tool_calls: default true.
- max_output_tokens: optional; omitted when not set.
- web_search.enabled: defaults to WEB_SEARCH_DEFAULT_ENABLED (env).
- web_search.search_context_size: defaults to WEB_SEARCH_CONTEXT_SIZE (env). Allowed: low|medium|high.
Environment variable mapping
- OPENAI_API_KEY (required)
- OPENAI_MODEL → model default
- OPENAI_MAX_RETRIES → OpenAI client
- OPENAI_TIMEOUT_MS → OpenAI client
- REASONING_EFFORT → reasoning_effort default (low|minimal|medium|high)
- DEFAULT_VERBOSITY → verbosity default (low|medium|high)
- WEB_SEARCH_DEFAULT_ENABLED → web_search.enabled default (true|false)
- WEB_SEARCH_CONTEXT_SIZE → web_search.search_context_size default (low|medium|high)
Output shape
- On success: content: [{ type: "text", text: string }]
- On error: isError: true and a text item with Error: ...
Notes
- If the selected model does not support certain fields (e.g., verbosity), they are ignored.
- Keep API keys out of logs. Ensure .env is not committed.
License
MIT
日本語 (Japanese)
Tool: gpt5_query
Input schema (JSON):
Example (Inspector, etc.):
Defaults and behavior
- model: defaults to OPENAI_MODEL (env var). Example: gpt-5.
- system: optional. Sent to OpenAI as instructions.
- reasoning_effort: accepts low|minimal|medium|high; internally, low is treated as minimal. Constraint: when web_search.enabled=true and effort is minimal, it is automatically raised to medium to match OpenAI's constraints.
- verbosity: defaults to DEFAULT_VERBOSITY (env var). Sent to OpenAI as text.verbosity.
- tool_choice: defaults to auto.
- parallel_tool_calls: defaults to true.
- max_output_tokens: optional. Not sent when unspecified.
- web_search.enabled: defaults to WEB_SEARCH_DEFAULT_ENABLED (env var).
- web_search.search_context_size: defaults to WEB_SEARCH_CONTEXT_SIZE (env var). Allowed values: low|medium|high.
Environment variable mapping
- OPENAI_API_KEY (required)
- OPENAI_MODEL → model default
- OPENAI_MAX_RETRIES → OpenAI client settings
- OPENAI_TIMEOUT_MS → OpenAI client settings
- REASONING_EFFORT → reasoning_effort default (low|minimal|medium|high)
- DEFAULT_VERBOSITY → verbosity default (low|medium|high)
- WEB_SEARCH_DEFAULT_ENABLED → web_search.enabled default (true|false)
- WEB_SEARCH_CONTEXT_SIZE → web_search.search_context_size default (low|medium|high)
Output shape
- On success: content: [{ type: "text", text: string }]
- On error: isError: true with a text item containing Error: ...
Notes
- If the selected model does not support certain fields (e.g., verbosity), they are ignored.
- API keys are never written to logs. Do not commit .env.
Using the MCP Server
This server runs over Model Context Protocol (MCP) stdio. Because it is designed to emit only pure JSON on stdout, it can be connected safely from MCP Inspector, Claude Code, and Claude Desktop.
Prerequisites
- Node.js 18+
- An OpenAI API key set in .env or as an environment variable
Build
Run directly (recommended)
- Command: node
- Args: dist/cli.js
- CWD: repository root (required when loading .env)
Example:
Add to Claude Code (VS Code extension)
VS Code Command Palette → "Claude: Manage MCP Servers"
Choose "Add server" and enter:
- Name: gpt5-mcp
- Command: node (an absolute path also works)
- Args: ["/absolute/path/to/gpt5-mcp-server/dist/cli.js"] (not needed if installed globally)
- Env (choose one):
  - Option A (ENV_FILE): ENV_FILE=/absolute/path/to/gpt5-mcp-server/.env
  - Option B (explicit): OPENAI_API_KEY, OPENAI_MODEL, OPENAI_TIMEOUT_MS, DEFAULT_VERBOSITY, REASONING_EFFORT, WEB_SEARCH_DEFAULT_ENABLED, WEB_SEARCH_CONTEXT_SIZE
Add to Claude Desktop
Edit the config file (e.g., on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json) and add the following.
Option A: using ENV_FILE
Option B: explicit environment variables
CLI usage
The package includes a bin entry.
- Local link: after npm link, run gpt5-mcp-server
- Global (after publish): npm i -g gpt5-mcp-server → gpt5-mcp-server
- Direct: node /absolute/path/to/gpt5-mcp-server/dist/cli.js
Web Search notes
- Due to OpenAI constraints, web_search_preview cannot be combined with reasoning.effort = minimal.
- When web_search.enabled = true, this server automatically raises effort to medium before calling the API.
- If you strictly need minimal, set web_search.enabled = false.
Troubleshooting
- JSON parse error (Unexpected token ...): extra output may be mixed into stdio. Use node dist/cli.js and avoid npx. (.env loading and library logging are already suppressed.)
- Auth error: verify that OPENAI_API_KEY is being passed correctly.
- Timeout: increase OPENAI_TIMEOUT_MS (e.g., 120000).
- 400 error with Web Search: caused by the incompatibility between reasoning.effort=minimal and web_search. Effort is raised to medium automatically; alternatively, specify medium explicitly or disable web_search.