gemini_chat

Chat with Gemini AI using text and up to 10 reference images, maintaining multi-turn conversations with visual context.

Instructions

Chat with the Gemini 3.1 Flash model. Supports multi-turn conversations with up to 10 reference images.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| message | Yes | The message to send to Gemini | |
| images | No | Array of image paths to include in the chat (max 10). Supports file paths, `last`, or `history:N` references. | `[]` |
| conversation_id | No | Conversation ID for maintaining context and accessing image history | `"default"` |
| system_prompt | No | System prompt to guide the model's behavior | |
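
A hypothetical example of the arguments a client might pass for this tool, based on the schema above (the values are illustrative, not from the source):

```typescript
// Example arguments for gemini_chat. Only `message` is required; image
// entries can mix file paths with history references like "last".
const exampleArgs = {
  message: "Compare these two renders and describe the differences.",
  images: ["renders/v1.png", "last"],  // up to 10 entries
  conversation_id: "design-review",    // keeps multi-turn context
  system_prompt: "You are a concise visual QA assistant.",
};
```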

Implementation Reference

  • The logic that executes the gemini_chat tool, handling message processing, history, image attachments, and calling the Gemini API.
    case "gemini_chat": {
      const { message, conversation_id = "default", system_prompt, images = [] } = args as any;
    
      const context = getOrCreateContext(conversation_id);
      const effectiveModel = context.selectedModel ?? IMAGE_MODEL;
      const model = genAI.getGenerativeModel({
        model: effectiveModel,
        systemInstruction: system_prompt,
      });
    
      // Build message parts with images (max 10)
      const messageParts: Part[] = [{ text: message }];
      const imageRefs = (images as string[]).slice(0, 10);
      const failedImages: Array<{ path: string; reason: string }> = [];
    
      for (const imgRef of imageRefs) {
        try {
          // Check for history reference
          const historyImage = getImageFromHistory(context, imgRef);
          if (historyImage) {
            messageParts.push({
              inlineData: {
                mimeType: historyImage.mimeType,
                data: historyImage.base64Data,
              },
            });
          } else {
            // File path
            let resolvedPath = imgRef;
            if (!path.isAbsolute(resolvedPath)) {
              resolvedPath = path.join(process.cwd(), resolvedPath);
            }
            // Try alternative path if not found
            try {
              await fs.access(resolvedPath);
            } catch {
              const homeDir = os.homedir();
              const altPath = path.join(homeDir, 'Documents', 'nanobanana_generated', path.basename(imgRef));
              await fs.access(altPath);
              resolvedPath = altPath;
            }
            const base64 = await imageToBase64(resolvedPath);
            messageParts.push({
              inlineData: {
                // NOTE: the MIME type is assumed PNG even for non-PNG files on disk
                mimeType: "image/png",
                data: base64,
              },
            });
          }
        } catch (error) {
          failedImages.push({
            path: imgRef,
            reason: error instanceof Error ? error.message : String(error),
          });
        }
      }
    
      // Add user message to history
      context.history.push({
        role: "user",
        parts: messageParts,
      });
    
      // Start chat with history
      const chat = model.startChat({
        history: context.history.slice(0, -1), // All except the last message
      });
    
      const result = await chat.sendMessage(messageParts);
      const response = result.response;
      const text = response.text();
    
      // Add model response to history
      context.history.push({
        role: "model",
        parts: [{ text }],
      });
    
      const imageCount = messageParts.length - 1;
      let responseText = imageCount > 0
        ? `[${imageCount} image(s) included]\n\n${text}`
        : text;
    
      if (failedImages.length > 0) {
        responseText += `\n\nWarning: ${failedImages.length} image(s) could not be loaded:\n`;
        responseText += failedImages.map(f => `  - ${f.path}: ${f.reason}`).join('\n');
      }
    
      return {
        content: [{ type: "text", text: responseText }],
      };
    }
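  • The handler above calls two helpers that are not shown in the excerpt: `getImageFromHistory` and `imageToBase64`. A minimal sketch of how they might look, assuming the conversation context stores base64-encoded images and that `history:N` is a 0-based index into that history (both internal shapes are assumptions, not confirmed by the source):

```typescript
import { promises as fs } from "fs";

// Assumed shapes for a conversation's stored images; the real types
// in src/index.ts may differ.
interface StoredImage {
  mimeType: string;
  base64Data: string;
}

interface ConversationContext {
  imageHistory: StoredImage[]; // oldest first
}

// Resolve 'last' or 'history:N' references against the stored history.
// Returns undefined for anything else, so the caller falls back to
// treating the reference as a file path.
function getImageFromHistory(
  context: ConversationContext,
  ref: string,
): StoredImage | undefined {
  if (ref === "last") {
    return context.imageHistory[context.imageHistory.length - 1];
  }
  const match = /^history:(\d+)$/.exec(ref);
  if (match) {
    return context.imageHistory[Number(match[1])];
  }
  return undefined;
}

// Read a file and return its contents as a base64 string.
async function imageToBase64(filePath: string): Promise<string> {
  const buffer = await fs.readFile(filePath);
  return buffer.toString("base64");
}
```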
  • src/index.ts:284-303 (registration)
    The registration of the gemini_chat tool in the ListToolsRequestSchema handler.
    {
      name: "gemini_chat",
      description: "Chat with Gemini 3.1 Flash model. Supports multi-turn conversations with up to 10 reference images.",
      inputSchema: {
        type: "object",
        properties: {
          message: {
            type: "string",
            description: "The message to send to Gemini",
          },
          images: {
            type: "array",
            items: { type: "string" },
            description: "Array of image paths to include in the chat (max 10). Supports file paths, 'last', or 'history:N' references.",
            maxItems: 10,
          },
          conversation_id: {
            type: "string",
            description: "Optional conversation ID for maintaining context and accessing image history",
          },
          system_prompt: {
            type: "string",
            description: "Optional system prompt to guide the model's behavior",
          },
        },
        required: ["message"],
      },
    },

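  • The defaulting in the handler (`conversation_id = "default"`, `images = []`, and the `.slice(0, 10)` cap) implies a normalization step like the following sketch; `normalizeArgs` is a hypothetical name introduced here for illustration, not part of the source:

```typescript
// Normalize raw tool arguments the way the gemini_chat handler's
// destructuring defaults do: `message` is the only required field,
// everything else gets a default, and images are capped at 10.
function normalizeArgs(raw: Record<string, unknown>) {
  if (typeof raw.message !== "string") {
    throw new Error("gemini_chat: 'message' is required and must be a string");
  }
  return {
    message: raw.message,
    conversation_id: (raw.conversation_id as string) ?? "default",
    system_prompt: raw.system_prompt as string | undefined,
    images: ((raw.images as string[]) ?? []).slice(0, 10),
  };
}
```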