Glama

ask-gemini

Analyze files and codebases using natural language queries. Supports model selection, sandbox testing, and structured edit suggestions for code changes.

Instructions

Supports model selection (-m), sandbox mode (-s), and a changeMode boolean that returns structured edit suggestions.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Analysis request. Use @ syntax to include files (e.g., '@largefile.js explain what this does') or ask general questions. | — |
| model | No | Model to use (e.g., 'gemini-2.5-flash'). | gemini-2.5-pro |
| sandbox | No | Use sandbox mode (-s flag) to safely test code changes, execute scripts, or run potentially risky operations in an isolated environment. | false |
| changeMode | No | Enable structured change mode: formats prompts to prevent tool errors and returns structured edit suggestions that Claude can apply directly. | false |
| chunkIndex | No | Which chunk to return (1-based). | — |
| chunkCacheKey | No | Cache key for continuation. | — |
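To make the schema concrete, here are example argument payloads a client might send. All values below are illustrative, not canonical; in particular the cache key is a hypothetical placeholder for whatever key the server returns with the first chunk.

```typescript
// Example argument payloads for the ask-gemini tool (illustrative values only).

// Plain analysis query using the @ file-inclusion syntax.
const simpleQuery = {
  prompt: "@src/index.ts explain what this file does",
};

// Structured-edit request on an explicit model.
const structuredEdit = {
  prompt: "@src/utils/helper.js rename getMessage to fetchMessage",
  model: "gemini-2.5-flash",
  changeMode: true,
};

// Continuation request for the second chunk of a previous changeMode response.
const continuation = {
  prompt: "@src/utils/helper.js rename getMessage to fetchMessage",
  changeMode: true,
  chunkIndex: 2,
  chunkCacheKey: "abc123", // hypothetical key returned with the first chunk
};
```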

Implementation Reference

  • The main execute handler for the 'ask-gemini' tool. It validates the arguments, invokes executeGeminiCLI (or serves a cached chunk via processChangeModeOutput), and formats the response according to changeMode.

    execute: async (args, onProgress) => {
      const { prompt, model, sandbox, changeMode, chunkIndex, chunkCacheKey } = args;
      if (!prompt?.trim()) {
        throw new Error(ERROR_MESSAGES.NO_PROMPT_PROVIDED);
      }
      // Continuation request: serve a cached chunk without re-running Gemini
      if (changeMode && chunkIndex && chunkCacheKey) {
        return processChangeModeOutput(
          '', // raw result is empty; the chunk cache is consulted instead
          chunkIndex as number,
          chunkCacheKey as string,
          prompt as string
        );
      }
      const result = await executeGeminiCLI(
        prompt as string,
        model as string | undefined,
        !!sandbox,
        !!changeMode,
        onProgress
      );
      if (changeMode) {
        return processChangeModeOutput(
          result,
          args.chunkIndex as number | undefined,
          undefined,
          prompt as string
        );
      }
      return `${STATUS_MESSAGES.GEMINI_RESPONSE}\n${result}`; // changeMode false
    }
  • Zod schema for input validation of the 'ask-gemini' tool arguments.

    const askGeminiArgsSchema = z.object({
      prompt: z.string().min(1).describe(
        "Analysis request. Use @ syntax to include files (e.g., '@largefile.js explain what this does') or ask general questions"
      ),
      model: z.string().optional().describe(
        "Optional model to use (e.g., 'gemini-2.5-flash'). If not specified, uses the default model (gemini-2.5-pro)."
      ),
      sandbox: z.boolean().default(false).describe(
        "Use sandbox mode (-s flag) to safely test code changes, execute scripts, or run potentially risky operations in an isolated environment"
      ),
      changeMode: z.boolean().default(false).describe(
        "Enable structured change mode - formats prompts to prevent tool errors and returns structured edit suggestions that Claude can apply directly"
      ),
      chunkIndex: z.union([z.number(), z.string()]).optional().describe("Which chunk to return (1-based)"),
      chunkCacheKey: z.string().optional().describe("Optional cache key for continuation"),
    });
  • Registration of the askGeminiTool (and other tools) into the toolRegistry.

    toolRegistry.push(
      askGeminiTool,
      pingTool,
      helpTool,
      brainstormTool,
      fetchChunkTool,
      timeoutTestTool
    );
  • Core helper that executes the Gemini CLI command. It formats the prompt for changeMode, builds the command arguments, and falls back to the flash model when the pro model's quota is exceeded.

    export async function executeGeminiCLI(
      prompt: string,
      model?: string,
      sandbox?: boolean,
      changeMode?: boolean,
      onProgress?: (newOutput: string) => void
    ): Promise<string> {
      let prompt_processed = prompt;
      if (changeMode) {
        // Normalize file:path references to the CLI's @path syntax
        prompt_processed = prompt.replace(/file:(\S+)/g, '@$1');
        const changeModeInstructions = `
    [CHANGEMODE INSTRUCTIONS]
    You are generating code modifications that will be processed by an automated system. The output format is critical because it enables programmatic application of changes without human intervention.

    INSTRUCTIONS:
    1. Analyze each provided file thoroughly
    2. Identify locations requiring changes based on the user request
    3. For each change, output in the exact format specified
    4. The OLD section must be EXACTLY what appears in the file (copy-paste exact match)
    5. Provide complete, directly replacing code blocks
    6. Verify line numbers are accurate

    CRITICAL REQUIREMENTS:
    1. Output edits in the EXACT format specified below - no deviations
    2. The OLD string MUST be findable with Ctrl+F - it must be a unique, exact match
    3. Include enough surrounding lines to make the OLD string unique
    4. If a string appears multiple times (like </div>), include enough context lines above and below to make it unique
    5. Copy the OLD content EXACTLY as it appears - including all whitespace, indentation, line breaks
    6. Never use partial lines - always include complete lines from start to finish

    OUTPUT FORMAT (follow exactly):

    **FILE: [filename]:[line_number]**
    \`\`\`
    OLD:
    [exact code to be replaced - must match file content precisely]
    NEW:
    [new code to insert - complete and functional]
    \`\`\`

    EXAMPLE 1 - Simple unique match:

    **FILE: src/utils/helper.js:100**
    \`\`\`
    OLD:
    function getMessage() {
      return "Hello World";
    }
    NEW:
    function getMessage() {
      return "Hello Universe!";
    }
    \`\`\`

    EXAMPLE 2 - Common tag needing context:

    **FILE: index.html:245**
    \`\`\`
    OLD:
        </div>
      </div>
    </section>
    NEW:
        </div>
      </footer>
    </section>
    \`\`\`

    IMPORTANT: The OLD section must be an EXACT copy from the file that can be found with Ctrl+F!

    USER REQUEST: ${prompt_processed}
    `;
        prompt_processed = changeModeInstructions;
      }

      const args = [];
      if (model) {
        args.push(CLI.FLAGS.MODEL, model);
      }
      if (sandbox) {
        args.push(CLI.FLAGS.SANDBOX);
      }

      // Ensure @ symbols work cross-platform by wrapping the prompt in quotes if needed
      const finalPrompt = prompt_processed.includes('@') && !prompt_processed.startsWith('"')
        ? `"${prompt_processed}"`
        : prompt_processed;
      args.push(CLI.FLAGS.PROMPT, finalPrompt);

      try {
        return await executeCommand(CLI.COMMANDS.GEMINI, args, onProgress);
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : String(error);
        if (errorMessage.includes(ERROR_MESSAGES.QUOTA_EXCEEDED) && model !== MODELS.FLASH) {
          Logger.warn(`${ERROR_MESSAGES.QUOTA_EXCEEDED}. Falling back to ${MODELS.FLASH}.`);
          await sendStatusMessage(STATUS_MESSAGES.FLASH_RETRY);

          const fallbackArgs = [];
          fallbackArgs.push(CLI.FLAGS.MODEL, MODELS.FLASH);
          if (sandbox) {
            fallbackArgs.push(CLI.FLAGS.SANDBOX);
          }
          // Same @ symbol handling for the fallback invocation
          const fallbackPrompt = prompt_processed.includes('@') && !prompt_processed.startsWith('"')
            ? `"${prompt_processed}"`
            : prompt_processed;
          fallbackArgs.push(CLI.FLAGS.PROMPT, fallbackPrompt);

          try {
            const result = await executeCommand(CLI.COMMANDS.GEMINI, fallbackArgs, onProgress);
            Logger.warn(`Successfully executed with ${MODELS.FLASH} fallback.`);
            await sendStatusMessage(STATUS_MESSAGES.FLASH_SUCCESS);
            return result;
          } catch (fallbackError) {
            const fallbackErrorMessage = fallbackError instanceof Error ? fallbackError.message : String(fallbackError);
            throw new Error(`${MODELS.PRO} quota exceeded, ${MODELS.FLASH} fallback also failed: ${fallbackErrorMessage}`);
          }
        } else {
          throw error;
        }
      }
    }
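The quota-exceeded fallback in executeGeminiCLI can be illustrated in isolation. The following is a minimal sketch, not the project's actual implementation: the runner, the literal model names, and the 'Quota exceeded' error text stand in for executeCommand, the MODELS constants, and ERROR_MESSAGES.QUOTA_EXCEEDED.

```typescript
// Minimal sketch of the quota-fallback pattern (illustrative, not the real code).
type Runner = (model: string) => Promise<string>;

async function runWithFallback(
  run: Runner,
  primary: string,
  fallback: string
): Promise<string> {
  try {
    return await run(primary);
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    // Only fall back on quota errors, and never when already on the fallback model
    if (message.includes('Quota exceeded') && primary !== fallback) {
      return await run(fallback);
    }
    throw error;
  }
}

// Stubbed runner: the primary model always reports a quota error.
const stub: Runner = async (model) => {
  if (model === 'gemini-2.5-pro') throw new Error('Quota exceeded for model');
  return `ok from ${model}`;
};
```

Keeping the fallback condition narrow (quota errors only, and only when a cheaper model is available) avoids masking unrelated failures behind a silent retry.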
  • Core helper for processing changeMode output: it parses OLD/NEW edits, validates them, chunks large responses, caches the chunks, and formats the result for tool use.

    export async function processChangeModeOutput(
      rawResult: string,
      chunkIndex?: number,
      chunkCacheKey?: string,
      prompt?: string
    ): Promise<string> {
      // Check for cached chunks first
      if (chunkIndex && chunkCacheKey) {
        const cachedChunks = getChunks(chunkCacheKey);
        if (cachedChunks && chunkIndex > 0 && chunkIndex <= cachedChunks.length) {
          Logger.debug(`Using cached chunk ${chunkIndex} of ${cachedChunks.length}`);
          const chunk = cachedChunks[chunkIndex - 1];
          let result = formatChangeModeResponse(
            chunk.edits,
            { current: chunkIndex, total: cachedChunks.length, cacheKey: chunkCacheKey }
          );
          // Add summary for first chunk only
          if (chunkIndex === 1 && chunk.edits.length > 5) {
            const allEdits = cachedChunks.flatMap(c => c.edits);
            result = summarizeChangeModeEdits(allEdits) + '\n\n' + result;
          }
          return result;
        }
        Logger.debug(`Cache miss or invalid chunk index, processing new result`);
      }

      // Parse OLD/NEW format
      const edits = parseChangeModeOutput(rawResult);
      if (edits.length === 0) {
        return `No edits found in Gemini's response. Please ensure Gemini uses the OLD/NEW format.\n\n${rawResult}`;
      }

      // Validate edits
      const validation = validateChangeModeEdits(edits);
      if (!validation.valid) {
        return `Edit validation failed:\n${validation.errors.join('\n')}`;
      }

      const chunks = chunkChangeModeEdits(edits);

      // Cache if multiple chunks and we have the original prompt
      let cacheKey: string | undefined;
      if (chunks.length > 1 && prompt) {
        cacheKey = cacheChunks(prompt, chunks);
        Logger.debug(`Cached ${chunks.length} chunks with key: ${cacheKey}`);
      }

      // Return requested chunk or first chunk
      const returnChunkIndex = (chunkIndex && chunkIndex > 0 && chunkIndex <= chunks.length) ? chunkIndex : 1;
      const returnChunk = chunks[returnChunkIndex - 1];

      // Format the response
      let result = formatChangeModeResponse(
        returnChunk.edits,
        chunks.length > 1 ? { current: returnChunkIndex, total: chunks.length, cacheKey } : undefined
      );

      // Add summary if helpful (only for first chunk)
      if (returnChunkIndex === 1 && edits.length > 5) {
        result = summarizeChangeModeEdits(edits, chunks.length > 1) + '\n\n' + result;
      }

      Logger.debug(`ChangeMode: Parsed ${edits.length} edits, ${chunks.length} chunks, returning chunk ${returnChunkIndex}`);
      return result;
    }
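parseChangeModeOutput is referenced above but not shown on this page. A minimal, illustrative parser for the **FILE:**/OLD:/NEW: format described in the changeMode instructions could look like the following; the interface and function names here are hypothetical, and the project's real parser may differ.

```typescript
// Illustrative parser for the OLD/NEW edit format (not the project's actual code).
interface ParsedEdit {
  filename: string;
  line: number;
  oldCode: string;
  newCode: string;
}

function parseOldNewEdits(raw: string): ParsedEdit[] {
  const edits: ParsedEdit[] = [];
  // Match blocks of the form: **FILE: path:line** ``` OLD: ... NEW: ... ```
  const blockRe = /\*\*FILE:\s*(.+?):(\d+)\*\*\s*```\s*OLD:\n([\s\S]*?)\nNEW:\n([\s\S]*?)\n```/g;
  let m: RegExpExecArray | null;
  while ((m = blockRe.exec(raw)) !== null) {
    edits.push({
      filename: m[1].trim(),
      line: parseInt(m[2], 10),
      oldCode: m[3],
      newCode: m[4],
    });
  }
  return edits;
}

// Demo input in the expected format.
const sample = [
  '**FILE: src/a.ts:10**',
  '```',
  'OLD:',
  'const x = 1;',
  'NEW:',
  'const x = 2;',
  '```',
].join('\n');

const demoEdits = parseOldNewEdits(sample);
```

Because the OLD string must be an exact, unique match in the target file, a parser like this can hand its results straight to a find-and-replace step with no fuzzy matching.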


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jamubc/gemini-mcp-tool'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.