ask_gemini

Query Google Gemini models through the MCP AI Bridge: send a prompt and receive a generated response, with the model and sampling temperature configurable per request.

Instructions

Ask Google Gemini AI a question

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The prompt to send to Gemini | |
| model | No | The model to use | gemini-1.5-flash-latest |
| temperature | No | Temperature for response generation (0-1) | |
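A set of arguments satisfying this schema might look like the following sketch. The field names come from the table above; the prompt text and the local pre-checks are illustrative, not part of the server:

```javascript
// Example arguments for an 'ask_gemini' call; only 'prompt' is required.
const args = {
  prompt: 'Summarize the CAP theorem in two sentences.',
  model: 'gemini-1.5-flash-latest', // optional; this is the default
  temperature: 0.7,                 // optional; must fall in the 0-1 range
};

// Mirror the schema's constraints locally before sending the request.
if (typeof args.prompt !== 'string' || args.prompt.length === 0) {
  throw new Error('prompt is required');
}
if (args.temperature !== undefined && (args.temperature < 0 || args.temperature > 1)) {
  throw new Error('temperature must be between 0 and 1');
}
```

The server performs its own validation (see the handler below), so these client-side checks are a convenience, not a substitute.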

Implementation Reference

  • src/index.js:119-147 (registration)
    Registration of the 'ask_gemini' tool including its name, description, and input schema in the getAvailableTools() method.
    ```javascript
    if (this.gemini) {
      tools.push({
        name: 'ask_gemini',
        description: 'Ask Google Gemini AI a question',
        inputSchema: {
          type: 'object',
          properties: {
            prompt: {
              type: 'string',
              description: 'The prompt to send to Gemini',
            },
            model: {
              type: 'string',
              description: `The model to use (default: ${DEFAULTS.GEMINI.MODEL})`,
              enum: MODELS.GEMINI,
              default: DEFAULTS.GEMINI.MODEL,
            },
            temperature: {
              type: 'number',
              description: `Temperature for response generation (${DEFAULTS.GEMINI.MIN_TEMPERATURE}-${DEFAULTS.GEMINI.MAX_TEMPERATURE})`,
              default: DEFAULTS.GEMINI.TEMPERATURE,
              minimum: DEFAULTS.GEMINI.MIN_TEMPERATURE,
              maximum: DEFAULTS.GEMINI.MAX_TEMPERATURE,
            },
          },
          required: ['prompt'],
        },
      });
    }
    ```
  • The handler function that implements the core logic for the 'ask_gemini' tool: validates inputs, calls the Gemini API, formats the response, and handles errors.
    ```javascript
    async handleGemini(args) {
      if (!this.gemini) {
        throw new ConfigurationError(ERROR_MESSAGES.GEMINI_NOT_CONFIGURED);
      }

      // Validate inputs
      const prompt = validatePrompt(args.prompt);
      const model = validateModel(args.model, 'GEMINI');
      const temperature = validateTemperature(args.temperature, 'GEMINI');

      try {
        if (process.env.NODE_ENV !== 'test')
          logger.debug(`Gemini request - model: ${model}, temperature: ${temperature}`);

        const geminiModel = this.gemini.getGenerativeModel({
          model: model,
          generationConfig: {
            temperature: temperature,
          },
        });

        const result = await geminiModel.generateContent(prompt);
        const response = await result.response;
        const text = response.text();

        return {
          content: [
            {
              type: 'text',
              text: `🤖 GEMINI RESPONSE (${model}):\n\n${text}`,
            },
          ],
        };
      } catch (error) {
        if (error.message?.includes('quota')) {
          throw new APIError('Gemini quota exceeded. Please try again later.', 'Gemini');
        } else if (error.message?.includes('API key')) {
          throw new ConfigurationError('Invalid Gemini API key');
        } else {
          throw new APIError(`Gemini API error: ${error.message}`, 'Gemini');
        }
      }
    }
    ```
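The `validate*` helpers the handler calls are not shown on this page. A minimal sketch of what `validateTemperature` might do, assuming a `DEFAULTS` object holding the bounds shown in the schema above (names here are illustrative, not the repository's actual implementation):

```javascript
// Hypothetical sketch of a temperature validator; the real helper lives
// elsewhere in the repository and may differ.
const DEFAULTS = {
  GEMINI: { TEMPERATURE: 0.7, MIN_TEMPERATURE: 0, MAX_TEMPERATURE: 1 },
};

function validateTemperature(value, provider) {
  const { TEMPERATURE, MIN_TEMPERATURE, MAX_TEMPERATURE } = DEFAULTS[provider];
  if (value === undefined) return TEMPERATURE; // fall back to the default
  if (typeof value !== 'number' || value < MIN_TEMPERATURE || value > MAX_TEMPERATURE) {
    throw new Error(
      `temperature must be between ${MIN_TEMPERATURE} and ${MAX_TEMPERATURE}`
    );
  }
  return value;
}
```

Returning the default when the argument is omitted keeps the handler free of per-field fallback logic.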
  • Dispatch case in the main request handler that routes 'ask_gemini' calls to the handleGemini method.
    ```javascript
    case 'ask_gemini':
      return await this.handleGemini(args);
    ```
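Only the `ask_gemini` case appears on this page; in context it presumably sits inside a switch over the requested tool name. A minimal self-contained sketch, assuming unknown names raise an error (the function and fallback here are hypothetical):

```javascript
// Sketch of a tool dispatcher around the 'ask_gemini' case shown above.
async function dispatchTool(server, name, args) {
  switch (name) {
    case 'ask_gemini':
      return await server.handleGemini(args);
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}
```

Failing loudly on unknown tool names surfaces client-side typos instead of silently returning nothing.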

MCP directory API

All information about MCP servers is available via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/fakoli/mcp-ai-bridge'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.