ask_gemini
Query Google Gemini AI models through the MCP AI Bridge to generate responses for prompts with configurable parameters.
Instructions
Ask Google Gemini AI a question
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to send to Gemini | |
| model | No | The model to use (default: gemini-1.5-flash-latest) | gemini-1.5-flash-latest |
| temperature | No | Temperature for response generation (0-1) | |
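A client invokes this tool through the standard MCP `tools/call` JSON-RPC request. The payload below is an illustrative sketch; the `id` and argument values are arbitrary examples, and only `prompt` is required (omitted fields fall back to the defaults above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_gemini",
    "arguments": {
      "prompt": "Summarize the MCP protocol in one sentence.",
      "model": "gemini-1.5-flash-latest",
      "temperature": 0.7
    }
  }
}
```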
Implementation Reference
- src/index.js:119-147 (registration) — registers the `ask_gemini` tool, including its name, description, and input schema, in the `getAvailableTools()` method:

```js
if (this.gemini) {
  tools.push({
    name: 'ask_gemini',
    description: 'Ask Google Gemini AI a question',
    inputSchema: {
      type: 'object',
      properties: {
        prompt: {
          type: 'string',
          description: 'The prompt to send to Gemini',
        },
        model: {
          type: 'string',
          description: `The model to use (default: ${DEFAULTS.GEMINI.MODEL})`,
          enum: MODELS.GEMINI,
          default: DEFAULTS.GEMINI.MODEL,
        },
        temperature: {
          type: 'number',
          description: `Temperature for response generation (${DEFAULTS.GEMINI.MIN_TEMPERATURE}-${DEFAULTS.GEMINI.MAX_TEMPERATURE})`,
          default: DEFAULTS.GEMINI.TEMPERATURE,
          minimum: DEFAULTS.GEMINI.MIN_TEMPERATURE,
          maximum: DEFAULTS.GEMINI.MAX_TEMPERATURE,
        },
      },
      required: ['prompt'],
    },
  });
}
```
- src/index.js:228-269 (handler) — implements the core logic for the `ask_gemini` tool: validates inputs, calls the Gemini API, formats the response, and maps errors:

```js
async handleGemini(args) {
  if (!this.gemini) {
    throw new ConfigurationError(ERROR_MESSAGES.GEMINI_NOT_CONFIGURED);
  }

  // Validate inputs
  const prompt = validatePrompt(args.prompt);
  const model = validateModel(args.model, 'GEMINI');
  const temperature = validateTemperature(args.temperature, 'GEMINI');

  try {
    if (process.env.NODE_ENV !== 'test')
      logger.debug(`Gemini request - model: ${model}, temperature: ${temperature}`);

    const geminiModel = this.gemini.getGenerativeModel({
      model: model,
      generationConfig: {
        temperature: temperature,
      },
    });

    const result = await geminiModel.generateContent(prompt);
    const response = await result.response;
    const text = response.text();

    return {
      content: [
        {
          type: 'text',
          text: `🤖 GEMINI RESPONSE (${model}):\n\n${text}`,
        },
      ],
    };
  } catch (error) {
    if (error.message?.includes('quota')) {
      throw new APIError('Gemini quota exceeded. Please try again later.', 'Gemini');
    } else if (error.message?.includes('API key')) {
      throw new ConfigurationError('Invalid Gemini API key');
    } else {
      throw new APIError(`Gemini API error: ${error.message}`, 'Gemini');
    }
  }
}
```
- src/index.js:176-177 (dispatch) — the case in the main request handler that routes `ask_gemini` calls to the `handleGemini` method:

```js
case 'ask_gemini':
  return await this.handleGemini(args);
```
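On success, the handler returns an MCP content array. Serialized over the wire, the tool result looks roughly like this (the model output shown is a placeholder, not real API output):

```json
{
  "content": [
    {
      "type": "text",
      "text": "🤖 GEMINI RESPONSE (gemini-1.5-flash-latest):\n\n<model output here>"
    }
  ]
}
```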