# ask_openai

Query OpenAI GPT models through the MCP AI Bridge server, supplying a prompt plus optional model and temperature parameters.

## Instructions
Ask OpenAI GPT models a question
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | The model to use (default: gpt-4o-mini) | gpt-4o-mini |
| prompt | Yes | The prompt to send to OpenAI | |
| temperature | No | Temperature for response generation (0-2) | |
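
For reference, a representative arguments object that satisfies this schema (the prompt text is illustrative):

```js
const args = {
  prompt: 'Explain the CAP theorem in two sentences.', // required
  model: 'gpt-4o-mini',  // optional; defaults to gpt-4o-mini
  temperature: 0.7,      // optional; must fall within 0-2
};
```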
## Implementation Reference
- src/index.js:190-226 (handler): The `handleOpenAI` function executes the `ask_openai` tool. It validates inputs, calls the OpenAI Chat Completions API, formats the response, and handles errors such as rate limits and invalid API keys.

  ```js
  async handleOpenAI(args) {
    if (!this.openai) {
      throw new ConfigurationError(ERROR_MESSAGES.OPENAI_NOT_CONFIGURED);
    }

    // Validate inputs
    const prompt = validatePrompt(args.prompt);
    const model = validateModel(args.model, 'OPENAI');
    const temperature = validateTemperature(args.temperature, 'OPENAI');

    try {
      if (process.env.NODE_ENV !== 'test') logger.debug(`OpenAI request - model: ${model}, temperature: ${temperature}`);

      const completion = await this.openai.chat.completions.create({
        model: model,
        messages: [{ role: 'user', content: prompt }],
        temperature: temperature,
      });

      return {
        content: [
          {
            type: 'text',
            text: `🤖 OPENAI RESPONSE (${model}):\n\n${completion.choices[0].message.content}`,
          },
        ],
      };
    } catch (error) {
      if (error.status === 429) {
        throw new APIError('OpenAI rate limit exceeded. Please try again later.', 'OpenAI');
      } else if (error.status === 401) {
        throw new ConfigurationError('Invalid OpenAI API key');
      } else {
        throw new APIError(`OpenAI API error: ${error.message}`, 'OpenAI');
      }
    }
  }
  ```
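
  To exercise the handler end to end, a client can invoke the tool over MCP. A minimal sketch, assuming the standard `@modelcontextprotocol/sdk` client API and that the server is launched via `node src/index.js` (the entry point named in these references); the prompt and client name are illustrative:

  ```js
  import { Client } from '@modelcontextprotocol/sdk/client/index.js';
  import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

  // Spawn the bridge server over stdio and connect a client to it.
  const transport = new StdioClientTransport({ command: 'node', args: ['src/index.js'] });
  const client = new Client({ name: 'example-client', version: '1.0.0' }, { capabilities: {} });
  await client.connect(transport);

  // Call the tool; on success the text content carries the formatted response.
  const result = await client.callTool({
    name: 'ask_openai',
    arguments: { prompt: 'Explain the CAP theorem in two sentences.', temperature: 0.7 },
  });
  console.log(result.content[0].text); // 🤖 OPENAI RESPONSE (gpt-4o-mini): ...

  await client.close();
  ```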
- src/index.js:93-115 (schema): Defines the input schema for the `ask_openai` tool, with `prompt` required and `model` and `temperature` optional but constrained.

  ```js
  inputSchema: {
    type: 'object',
    properties: {
      prompt: {
        type: 'string',
        description: 'The prompt to send to OpenAI',
      },
      model: {
        type: 'string',
        description: `The model to use (default: ${DEFAULTS.OPENAI.MODEL})`,
        enum: MODELS.OPENAI,
        default: DEFAULTS.OPENAI.MODEL,
      },
      temperature: {
        type: 'number',
        description: `Temperature for response generation (${DEFAULTS.OPENAI.MIN_TEMPERATURE}-${DEFAULTS.OPENAI.MAX_TEMPERATURE})`,
        default: DEFAULTS.OPENAI.TEMPERATURE,
        minimum: DEFAULTS.OPENAI.MIN_TEMPERATURE,
        maximum: DEFAULTS.OPENAI.MAX_TEMPERATURE,
      },
    },
    required: ['prompt'],
  },
  ```
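
  The constraints can be exercised in isolation with any JSON Schema validator. A minimal sketch using Ajv (not a project dependency), with the constants resolved to the documented values — default model `gpt-4o-mini`, temperature range 0-2; the `enum` of allowed models is omitted because `MODELS.OPENAI` is not listed in this section:

  ```js
  import Ajv from 'ajv';

  const inputSchema = {
    type: 'object',
    properties: {
      prompt: { type: 'string' },
      model: { type: 'string', default: 'gpt-4o-mini' },
      temperature: { type: 'number', minimum: 0, maximum: 2 },
    },
    required: ['prompt'],
  };

  // useDefaults fills in `model` on valid inputs, mirroring the schema's defaults.
  const validate = new Ajv({ useDefaults: true }).compile(inputSchema);

  const args = { prompt: 'Summarize RFC 2119 in one sentence.', temperature: 0.3 };
  console.log(validate(args)); // true; args.model is now 'gpt-4o-mini'
  console.log(validate({ temperature: 3 })); // false: prompt missing, temperature out of range
  ```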
- src/index.js:89-117 (registration): Registers the `ask_openai` tool in the list of available tools — conditionally, only when the OpenAI client is initialized — with its name, description, and input schema.

  ```js
  if (this.openai) {
    tools.push({
      name: 'ask_openai',
      description: 'Ask OpenAI GPT models a question',
      inputSchema: { /* full input schema as shown above (src/index.js:93-115) */ },
    });
  }
  ```
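
  This push presumably happens inside the server's ListTools handler; a minimal sketch of that surrounding context, assuming the standard MCP SDK request-handler pattern (everything outside the quoted snippet is an assumption):

  ```js
  import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

  this.server.setRequestHandler(ListToolsRequestSchema, async () => {
    const tools = [];

    // Conditional registration: ask_openai is only advertised when an
    // OpenAI API key was configured and the client was initialized.
    if (this.openai) {
      tools.push({ name: 'ask_openai', description: 'Ask OpenAI GPT models a question', inputSchema });
    }

    return { tools };
  });
  ```

  Advertising the tool only when the client exists keeps callers from discovering tools that would immediately fail with a configuration error.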
- src/index.js:173-176 (routing): Routes calls to the `ask_openai` tool to the `handleOpenAI` method in the CallToolRequest handler. The source snippet is truncated after the next case:

  ```js
  switch (name) {
    case 'ask_openai':
      return await this.handleOpenAI(args);
    case 'ask_gemini':
  ```
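
  For context, this dispatch presumably sits inside a CallToolRequest handler; a minimal sketch of that wrapper, assuming the standard MCP SDK pattern (the default branch and handler shape are assumptions, not quoted source):

  ```js
  import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

  this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;

    switch (name) {
      case 'ask_openai':
        return await this.handleOpenAI(args);
      // ...cases for the other bridged providers (ask_gemini, etc.)...
      default:
        throw new Error(`Unknown tool: ${name}`);
    }
  });
  ```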