
fetch-chunk

Retrieves cached data chunks from partial responses in the Codex MCP Server, enabling sequential access to large code analysis results for AI-assisted programming tasks.

Instructions

Retrieves cached chunks from a changeMode response. Use this to get subsequent chunks after receiving a partial changeMode response.

Input Schema

Name        Required  Description                                                 Default
cacheKey    Yes       The cache key provided in the initial changeMode response  (none)
chunkIndex  Yes       Which chunk to retrieve (1-based index)                     (none)
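
For example, after an ask-codex call with changeMode enabled returns a partial response and a cache key, the next chunk can be requested with arguments like the following (the cache key value here is illustrative, not a real key):

    {
      "cacheKey": "changeMode-1a2b3c4d",
      "chunkIndex": 2
    }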

Implementation Reference

  • The main execute function that handles the tool invocation: it parses the arguments, fetches the cached chunks with getChunks, validates chunkIndex, formats the response with formatChangeModeResponse, prepends a summary when the first chunk of a multi-chunk response is requested, and returns the formatted output or an error message (a client-side paging sketch follows this reference list).
    execute: async (args: any, onProgress?: (newOutput: string) => void): Promise<string> => {
      const { cacheKey, chunkIndex } = args;

      Logger.toolInvocation('fetch-chunk', args);
      Logger.debug(`Fetching chunk ${chunkIndex} with cache key: ${cacheKey}`);

      // Retrieve cached chunks
      const chunks = getChunks(cacheKey);
      if (!chunks) {
        return `❌ Cache miss: No chunks found for cache key "${cacheKey}".

Possible reasons:
1. The cache key is incorrect. Did you run ask-codex with changeMode enabled?
2. The cache has expired (10 minute TTL)
3. The MCP server was restarted and the file-based cache was cleared

Please re-run the original changeMode request to regenerate the chunks.`;
      }

      // Validate chunk index
      if (chunkIndex < 1 || chunkIndex > chunks.length) {
        return `❌ Invalid chunk index: ${chunkIndex}

Available chunks: 1 to ${chunks.length}
You requested: ${chunkIndex}

Please use a valid chunk index.`;
      }

      // Get the requested chunk
      const chunk = chunks[chunkIndex - 1];

      // Format the response
      let result = formatChangeModeResponse(chunk.edits, {
        current: chunkIndex,
        total: chunks.length,
        cacheKey,
      });

      // Add summary for first chunk
      if (chunkIndex === 1 && chunks.length > 1) {
        const allEdits = chunks.flatMap(c => c.edits);
        result = summarizeChangeModeEdits(allEdits, true) + '\n\n' + result;
      }

      Logger.debug(
        `Returning chunk ${chunkIndex} of ${chunks.length} with ${chunk.edits.length} edits`
      );
      return result;
    },
  • Zod input schema defining the required parameters: cacheKey (string) and chunkIndex (number >= 1); a short validation example follows this reference list.
    const inputSchema = z.object({
      cacheKey: z.string().describe('The cache key provided in the initial changeMode response'),
      chunkIndex: z.number().min(1).describe('Which chunk to retrieve (1-based index)'),
    });
  • The fetchChunkTool is imported (line 8) and registered by pushing it to the toolRegistry array alongside other tools.
    toolRegistry.push(
      askCodexTool,
      batchCodexTool,
      // reviewCodexTool,
      pingTool,
      helpTool,
      versionTool,
      brainstormTool,
      fetchChunkTool,
      timeoutTestTool
    );
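
A client that receives a partial changeMode response can page through the remaining output by calling fetch-chunk with an increasing chunkIndex until the reported total is reached. A minimal sketch of that loop, assuming the initial response reported the total chunk count and assuming a hypothetical callTool helper that forwards arguments to the execute function above:

    // Hypothetical client-side helper: callTool is an assumed wrapper around the MCP
    // tool invocation, not part of this server's code.
    async function fetchAllChunks(
      callTool: (name: string, args: object) => Promise<string>,
      cacheKey: string,
      totalChunks: number
    ): Promise<string[]> {
      const results: string[] = [];
      for (let chunkIndex = 1; chunkIndex <= totalChunks; chunkIndex++) {
        // Each call returns one formatted chunk; the first chunk also carries the
        // overall summary when there is more than one chunk.
        results.push(await callTool('fetch-chunk', { cacheKey, chunkIndex }));
      }
      return results;
    }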
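
Because chunkIndex is constrained with z.number().min(1), a zero or negative index is rejected during schema validation, before execute ever runs. A quick illustration using Zod's safeParse (the schema is re-declared here without the describe() annotations for brevity):

    import { z } from 'zod';

    const inputSchema = z.object({
      cacheKey: z.string(),
      chunkIndex: z.number().min(1),
    });

    // A well-formed request passes validation...
    console.log(inputSchema.safeParse({ cacheKey: 'abc123', chunkIndex: 1 }).success); // true
    // ...while a 0-based index fails the min(1) constraint.
    console.log(inputSchema.safeParse({ cacheKey: 'abc123', chunkIndex: 0 }).success); // false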


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cexll/codex-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.