
GPT-5 MCP Server

by nbrain-team

gpt5_query

Query GPT-5 AI models with configurable reasoning effort, verbosity levels, and optional web search integration for precise answers.

Instructions

Query GPT-5 with optional Web Search Preview. Supports verbosity and reasoning effort.

Input Schema

    Name    Required    Description    Default
    input   Yes

Implementation Reference

  • src/index.ts:39-59 (registration)
    Registers the gpt5_query tool on the MCP server with an input schema and a handler function that delegates to runQuery in openai.ts.

    server.tool(
      "gpt5_query",
      "Query GPT-5 with optional Web Search Preview. Supports verbosity and reasoning effort.",
      { input: QueryInputSchema },
      async ({ input }) => {
        const parsed = QueryInputSchema.parse(input);
        try {
          const text = await runQuery(openai, parsed as QueryInput, config);
          return {
            content: [{ type: "text" as const, text: text || "No response text available." }],
          };
        } catch (error) {
          console.error("Error calling OpenAI API:", error);
          const message = error instanceof Error ? error.message : "Unknown error";
          return {
            content: [{ type: "text" as const, text: `Error: ${message}` }],
            isError: true,
          };
        }
      }
    );
  • Zod input schema for the gpt5_query tool, defining all parameters: the query itself, per-call model overrides, web_search options, and so on.

    const QueryInputSchema = z.object({
      query: z.string().describe("User question or instruction"),
      // Per-call overrides
      model: z.string().optional().describe("Model name, e.g. gpt-5"),
      system: z.string().optional().describe("Optional system prompt/instructions for the model"),
      reasoning_effort: z.enum(["low", "minimal", "medium", "high"]).optional(),
      verbosity: z.enum(["low", "medium", "high"]).optional(),
      tool_choice: z.enum(["auto", "none"]).optional(),
      parallel_tool_calls: z.boolean().optional(),
      max_output_tokens: z.number().int().positive().optional(),
      web_search: z
        .object({
          enabled: z.boolean().optional(),
          search_context_size: z.enum(["low", "medium", "high"]).optional(),
        })
        .optional(),
    });
  • Core handler logic: builds the OpenAI request, calls the API, and extracts the output text. Called directly by the registered tool handler.

    export async function runQuery(openai: OpenAI, input: QueryInput, cfg: AppConfig) {
      const req = buildOpenAIRequest(input, cfg);
      const response: unknown = await openai.responses.create(
        req as unknown as Record<string, unknown>
      );
      const text = extractOutputText(response) ?? "";
      return text || "No response text available.";
    }
  • Helper function that builds the structured OpenAI request from tool input and config, handling defaults, web-search tool addition, reasoning adjustments, and so on.

    export function buildOpenAIRequest(input: QueryInput, cfg: AppConfig): OpenAIRequest {
      const model = input.model ?? cfg.model;
      const effRaw = (input.reasoning_effort ?? cfg.reasoningEffort) as
        | "low"
        | ReasoningEffort
        | undefined;
      let reasoningEffort: ReasoningEffort | undefined = effRaw
        ? ((effRaw === "low" ? "minimal" : effRaw) as ReasoningEffort)
        : undefined;

      // Bump reasoning for web search minimal constraint
      const webEnabled = input.web_search?.enabled ?? cfg.webSearchDefaultEnabled;
      if (reasoningEffort === "minimal" && webEnabled) {
        reasoningEffort = "medium";
      }

      const verbosity: Verbosity | undefined = input.verbosity ?? cfg.defaultVerbosity;
      const searchContextSize: SearchContextSize | undefined =
        input.web_search?.search_context_size ?? cfg.webSearchContextSize;
      const toolChoice = input.tool_choice ?? "auto";
      const parallelToolCalls = input.parallel_tool_calls ?? true;

      const tools: WebSearchPreviewTool[] = [];
      if (webEnabled) {
        const webTool: WebSearchPreviewTool = { type: "web_search_preview" };
        if (searchContextSize) {
          webTool.search_context_size = searchContextSize;
        }
        tools.push(webTool);
      }

      const req: OpenAIRequest = {
        model,
        input: input.query,
        tool_choice: toolChoice,
        parallel_tool_calls: parallelToolCalls,
      } as OpenAIRequest;
      if (input.system) req.instructions = input.system;
      if (tools.length > 0) req.tools = tools;
      if (reasoningEffort) req.reasoning = { effort: reasoningEffort };
      if (verbosity) req.text = { verbosity };
      if (input.max_output_tokens) req.max_output_tokens = input.max_output_tokens;
      return req;
    }
  • TypeScript type matching the tool's input schema, used internally.

    export type QueryInput = {
      query: string;
      model?: string;
      system?: string;
      reasoning_effort?: "low" | "minimal" | "medium" | "high";
      verbosity?: Verbosity;
      tool_choice?: "auto" | "none";
      parallel_tool_calls?: boolean;
      max_output_tokens?: number;
      web_search?: {
        enabled?: boolean;
        search_context_size?: SearchContextSize;
      };
    };
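The registered handler always returns an MCP tool result: on success, a text content block; on failure, the same shape with isError set. A minimal synchronous sketch of that envelope (toToolResult and ToolResult are illustrative names, not part of the server's code):

```typescript
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// Wrap a computation in the same success/error envelope the handler uses.
function toToolResult(run: () => string): ToolResult {
  try {
    const text = run();
    // Empty output is replaced by a fallback message, mirroring the handler.
    return { content: [{ type: "text", text: text || "No response text available." }] };
  } catch (error) {
    const message = error instanceof Error ? error.message : "Unknown error";
    return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
  }
}
```

This mirrors the handler's design choice of reporting API failures as in-band tool results rather than thrown exceptions, so MCP clients always receive a well-formed response.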
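For reference, here is a hypothetical input object that would satisfy the Zod schema above (the field names come from the schema; the values are illustrative only):

```typescript
// Example gpt5_query input: only `query` is required, everything else is optional.
const exampleInput = {
  query: "Summarize the differences between the Responses API and Chat Completions",
  reasoning_effort: "high" as const,
  verbosity: "low" as const,
  web_search: { enabled: true, search_context_size: "medium" as const },
};
```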
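The trickiest part of buildOpenAIRequest is the effort normalization: the schema accepts "low" but the request maps it to "minimal", and the source's comment notes a web-search constraint that bumps "minimal" up to "medium" when web search is enabled. A self-contained sketch of just that logic (normalizeEffort is an illustrative name, not a function in the server):

```typescript
type ReasoningEffort = "minimal" | "medium" | "high";

// Sketch of the effort handling in buildOpenAIRequest:
// "low" is treated as "minimal", and "minimal" is bumped to "medium"
// when web search is enabled.
function normalizeEffort(
  raw: "low" | ReasoningEffort | undefined,
  webEnabled: boolean
): ReasoningEffort | undefined {
  if (raw === undefined) return undefined;
  let effort: ReasoningEffort = raw === "low" ? "minimal" : raw;
  if (effort === "minimal" && webEnabled) effort = "medium";
  return effort;
}
```

So a caller passing reasoning_effort: "low" with web search enabled ends up sending effort "medium" to the API.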


MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nbrain-team/gpt5-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.