# ask_openai
Queries OpenAI GPT models for AI-generated responses to prompts, with a configurable model version and temperature.
## Instructions
Ask OpenAI GPT models a question
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to send to OpenAI | |
| model | No | The model to use (default: gpt-4o-mini) | gpt-4o-mini |
| temperature | No | Temperature for response generation (0-2) | `DEFAULTS.OPENAI.TEMPERATURE` |
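
For reference, a call to this tool from an MCP client might look like the following sketch. It assumes a client built with `@modelcontextprotocol/sdk` that is already connected to this server; the prompt and parameter values are illustrative.

```javascript
// Illustrative call from an MCP client; values are made up for the example.
const result = await client.callTool({
  name: 'ask_openai',
  arguments: {
    prompt: 'Summarize the Model Context Protocol in one sentence.',
    model: 'gpt-4o-mini', // optional; falls back to the default above
    temperature: 0.7,     // optional; must stay within 0-2
  },
});
console.log(result.content[0].text);
```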
## Implementation Reference
- **src/index.js:190-226 (handler)**: The handler that executes the `ask_openai` tool logic. It validates inputs, calls the OpenAI chat completions API, maps rate-limit (429) and invalid-key (401) failures to typed errors, and formats the response as MCP content.

  ```javascript
  async handleOpenAI(args) {
    if (!this.openai) {
      throw new ConfigurationError(ERROR_MESSAGES.OPENAI_NOT_CONFIGURED);
    }

    // Validate inputs
    const prompt = validatePrompt(args.prompt);
    const model = validateModel(args.model, 'OPENAI');
    const temperature = validateTemperature(args.temperature, 'OPENAI');

    try {
      if (process.env.NODE_ENV !== 'test')
        logger.debug(`OpenAI request - model: ${model}, temperature: ${temperature}`);

      const completion = await this.openai.chat.completions.create({
        model: model,
        messages: [{ role: 'user', content: prompt }],
        temperature: temperature,
      });

      return {
        content: [
          {
            type: 'text',
            text: `🤖 OPENAI RESPONSE (${model}):\n\n${completion.choices[0].message.content}`,
          },
        ],
      };
    } catch (error) {
      if (error.status === 429) {
        throw new APIError('OpenAI rate limit exceeded. Please try again later.', 'OpenAI');
      } else if (error.status === 401) {
        throw new ConfigurationError('Invalid OpenAI API key');
      } else {
        throw new APIError(`OpenAI API error: ${error.message}`, 'OpenAI');
      }
    }
  }
  ```
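  The `validatePrompt`, `validateModel`, and `validateTemperature` helpers are referenced here but defined elsewhere in the repo. Below is a minimal sketch of what they plausibly enforce, inferred from the schema bounds in this section; names like `ValidationError` and the exact rules are assumptions, not the repo's actual code.

  ```javascript
  // Hypothetical sketch only; the real helpers live elsewhere in src/ and may differ.
  function validatePrompt(prompt) {
    if (typeof prompt !== 'string' || prompt.trim().length === 0) {
      throw new ValidationError('prompt must be a non-empty string'); // assumed error type
    }
    return prompt;
  }

  function validateTemperature(temperature, provider) {
    const { TEMPERATURE, MIN_TEMPERATURE, MAX_TEMPERATURE } = DEFAULTS[provider];
    if (temperature === undefined) {
      return TEMPERATURE; // fall back to the schema default
    }
    if (typeof temperature !== 'number' ||
        temperature < MIN_TEMPERATURE || temperature > MAX_TEMPERATURE) {
      throw new ValidationError(
        `temperature must be a number between ${MIN_TEMPERATURE} and ${MAX_TEMPERATURE}`
      );
    }
    return temperature;
  }
  ```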
- **src/index.js:93-115 (schema)**: Input schema for the `ask_openai` tool, defining `prompt` (required), `model` (constrained by an enum), and `temperature` (with numeric bounds). Used both for the tool specification and for validation.

  ```javascript
  inputSchema: {
    type: 'object',
    properties: {
      prompt: {
        type: 'string',
        description: 'The prompt to send to OpenAI',
      },
      model: {
        type: 'string',
        description: `The model to use (default: ${DEFAULTS.OPENAI.MODEL})`,
        enum: MODELS.OPENAI,
        default: DEFAULTS.OPENAI.MODEL,
      },
      temperature: {
        type: 'number',
        description: `Temperature for response generation (${DEFAULTS.OPENAI.MIN_TEMPERATURE}-${DEFAULTS.OPENAI.MAX_TEMPERATURE})`,
        default: DEFAULTS.OPENAI.TEMPERATURE,
        minimum: DEFAULTS.OPENAI.MIN_TEMPERATURE,
        maximum: DEFAULTS.OPENAI.MAX_TEMPERATURE,
      },
    },
    required: ['prompt'],
  },
  ```
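  The schema interpolates constants such as `DEFAULTS.OPENAI` and `MODELS.OPENAI` whose values are not shown in this section. An illustrative shape follows; only the `gpt-4o-mini` default and the 0-2 temperature range are confirmed by the table above, everything else is an assumption.

  ```javascript
  // Assumed shape; only MODEL and the 0-2 temperature bounds are confirmed here.
  const DEFAULTS = {
    OPENAI: {
      MODEL: 'gpt-4o-mini',
      TEMPERATURE: 0.7, // assumed default value
      MIN_TEMPERATURE: 0,
      MAX_TEMPERATURE: 2,
    },
  };

  const MODELS = {
    OPENAI: ['gpt-4o', 'gpt-4o-mini'], // assumed enum; the actual list lives in the repo
  };
  ```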
- **src/index.js:90-116 (registration)**: Registers the `ask_openai` tool in the available-tools list (conditionally, only when the OpenAI client is initialized), providing the name, description, and input schema that MCP `listTools` returns. The embedded `inputSchema` is identical to the one shown above and is elided here.

  ```javascript
  tools.push({
    name: 'ask_openai',
    description: 'Ask OpenAI GPT models a question',
    inputSchema: {
      // ... identical to the input schema at src/index.js:93-115, shown above
    },
  });
  ```
- **src/index.js:174-175 (registration)**: Dispatches `ask_openai` tool calls to `handleOpenAI` from the main `CallToolRequestSchema` handler's switch statement.

  ```javascript
  case 'ask_openai':
    return await this.handleOpenAI(args);
  ```
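  For context, this case sits inside the server's `CallToolRequestSchema` request handler. A minimal sketch of that surrounding dispatch follows; only the `ask_openai` case is taken from the source, the rest is assumed.

  ```javascript
  // Assumed shape of the surrounding handler; only the 'ask_openai' case is from the source.
  this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;
    switch (name) {
      case 'ask_openai':
        return await this.handleOpenAI(args);
      default:
        throw new Error(`Unknown tool: ${name}`);
    }
  });
  ```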