get_next_chunk
Retrieves the next subtitle chunk for sequential translation processing after conversation detection, enabling chunk-by-chunk handling of large SRT files.
Instructions
📦 CHUNK RETRIEVAL FOR TRANSLATION WORKFLOW 📦

🎯 PURPOSE: Retrieves the next chunk from memory for sequential processing. Use this after detect_conversations with storeInMemory=true.

📋 HOW IT WORKS:
- Automatically tracks which chunk to return next
- Returns actual chunk data with subtitle text content
- Advances to the next chunk automatically
- Returns null when all chunks have been processed

📥 PARAMETERS:
- sessionId: Session ID from the detect_conversations response

📤 RETURNS:
- chunk: Complete chunk data with subtitle text (or null if done)
- chunkIndex: Current chunk number (0-based)
- totalChunks: Total chunks available
- hasMore: Boolean indicating whether more chunks exist
- message: Status message

💡 USAGE PATTERN:
1. Call detect_conversations with storeInMemory=true
2. Get the sessionId from the response
3. Call get_next_chunk repeatedly until hasMore=false
4. Process each chunk for translation
5. Use translate_srt() on individual chunks

📝 EXAMPLE: {"sessionId": "srt-session-123456789"}

⚠️ NOTE:
- Each call advances to the next chunk automatically
- Store the sessionId from the detect_conversations response
- Use this for chunk-by-chunk processing of large files
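The usage pattern above can be sketched as a client-side loop. This is a minimal self-contained simulation, not a real MCP client: the in-memory `getNextChunk` function below is a stand-in that mirrors only the documented response shape (`chunk`, `chunkIndex`, `totalChunks`, `hasMore`), and the session contents are made up for illustration.

```typescript
// Stand-in for the server-side tool: mirrors the documented response fields.
interface ChunkResponse {
  success: boolean;
  chunk: string | null;
  chunkIndex: number;
  totalChunks: number;
  hasMore: boolean;
  message: string;
}

const chunkMemory = new Map<string, string[]>(); // chunks by session ID
const chunkIndex = new Map<string, number>();    // cursor per session

function getNextChunk(sessionId: string): ChunkResponse {
  const chunks = chunkMemory.get(sessionId);
  if (!chunks) throw new Error(`Session ${sessionId} not found in memory`);
  const current = chunkIndex.get(sessionId) ?? 0;
  if (current >= chunks.length) {
    return {
      success: true, chunk: null, chunkIndex: current,
      totalChunks: chunks.length, hasMore: false,
      message: 'All chunks have been processed',
    };
  }
  chunkIndex.set(sessionId, current + 1); // each call advances the cursor
  return {
    success: true, chunk: chunks[current], chunkIndex: current,
    totalChunks: chunks.length, hasMore: current + 1 < chunks.length,
    message: `Retrieved chunk ${current + 1} of ${chunks.length}`,
  };
}

// Steps 1-2: pretend detect_conversations stored two chunks under this sessionId.
const sessionId = 'srt-session-123456789';
chunkMemory.set(sessionId, [
  '1\n00:00:01,000 --> 00:00:02,000\nHello',
  '2\n00:00:03,000 --> 00:00:04,000\nWorld',
]);

// Steps 3-5: pull chunks until hasMore is false, processing each one.
const processed: string[] = [];
let response = getNextChunk(sessionId);
while (response.chunk !== null) {
  processed.push(response.chunk); // here you would call translate_srt() on the chunk
  if (!response.hasMore) break;
  response = getNextChunk(sessionId);
}
console.log(processed.length); // → 2
```

Note that exhaustion is signaled two ways, matching the handler: `hasMore: false` on the last real chunk, and `chunk: null` on any call after that.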
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| sessionId | Yes | Session ID from detect_conversations with storeInMemory=true | (none) |
Implementation Reference
- src/mcp/server.ts:618-667 (handler): The core handler function that implements the get_next_chunk MCP tool. It retrieves the next SRT chunk from session-specific memory, returns chunk data or null if done, updates the current index, and provides status information.

```typescript
private async handleGetNextChunk(args: any) {
  const { sessionId } = args;

  if (!this.chunkMemory.has(sessionId)) {
    throw new Error(`Session ${sessionId} not found in memory`);
  }

  const chunks = this.chunkMemory.get(sessionId);
  const currentIndex = this.chunkIndex.get(sessionId) || 0;

  if (currentIndex >= chunks.length) {
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify({
            success: true,
            chunk: null,
            chunkIndex: currentIndex,
            totalChunks: chunks.length,
            hasMore: false,
            message: 'All chunks have been processed'
          }, null, 2),
        },
      ],
    };
  }

  const currentChunk = chunks[currentIndex];
  this.chunkIndex.set(sessionId, currentIndex + 1);

  return {
    content: [
      {
        type: 'text',
        text: JSON.stringify({
          success: true,
          chunk: currentChunk,
          chunkIndex: currentIndex,
          totalChunks: chunks.length,
          hasMore: currentIndex + 1 < chunks.length,
          message: `Retrieved chunk ${currentIndex + 1} of ${chunks.length}`,
          nextInstruction: currentIndex + 1 < chunks.length
            ? `Call get_next_chunk again to get chunk ${currentIndex + 2}`
            : 'All chunks have been retrieved'
        }, null, 2),
      },
    ],
  };
}
```
- src/mcp/server.ts:194-242 (registration): Registration of the get_next_chunk tool in the MCP server's tool list, including name, detailed description, and input schema definition.

```typescript
{
  name: 'get_next_chunk',
  description: `📦 CHUNK RETRIEVAL FOR TRANSLATION WORKFLOW 📦

🎯 PURPOSE: Retrieves the next chunk from memory for sequential processing. Use this after detect_conversations with storeInMemory=true.

📋 HOW IT WORKS:
- Automatically tracks which chunk to return next
- Returns actual chunk data with subtitle text content
- Advances to next chunk automatically
- Returns null when all chunks processed

📥 PARAMETERS:
- sessionId: Session ID from detect_conversations response

📤 RETURNS:
- chunk: Complete chunk data with subtitle text (or null if done)
- chunkIndex: Current chunk number (0-based)
- totalChunks: Total chunks available
- hasMore: Boolean indicating if more chunks exist
- message: Status message

💡 USAGE PATTERN:
1. Call detect_conversations with storeInMemory=true
2. Get sessionId from response
3. Call get_next_chunk repeatedly until hasMore=false
4. Process each chunk for translation
5. Use translate_srt() on individual chunks

📝 EXAMPLE: {"sessionId": "srt-session-123456789"}

⚠️ NOTE:
- Each call advances to the next chunk automatically
- Store sessionId from detect_conversations response
- Use this for chunk-by-chunk processing of large files`,
  inputSchema: {
    type: 'object',
    properties: {
      sessionId: {
        type: 'string',
        description: 'Session ID from detect_conversations with storeInMemory=true',
      },
    },
    required: ['sessionId'],
  },
},
```
- src/mcp/server.ts:399-400 (dispatch): Dispatch routing in the CallToolRequestSchema handler that maps the 'get_next_chunk' tool call to the handleGetNextChunk method.

```typescript
case 'get_next_chunk':
  return await this.handleGetNextChunk(args);
```
- src/mcp/server.ts:69-70 (helper): Class properties used by the handler to store chunks per session and track the current chunk index.

```typescript
private chunkMemory = new Map<string, any>();   // Store chunks by session ID
private chunkIndex = new Map<string, number>(); // Track current chunk index per session
```
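The two Maps above are what make sessions independent: each sessionId owns its own chunk list and its own cursor. A minimal standalone sketch of that bookkeeping (illustrative `advance` function and session IDs, not the actual server class):

```typescript
// Per-session state, as in the server: chunks keyed by session ID,
// plus a separate cursor Map so interleaved sessions never interfere.
const chunkMemory = new Map<string, string[]>(); // chunks by session ID
const chunkIndex = new Map<string, number>();    // current index per session

function advance(sessionId: string): string | null {
  const chunks = chunkMemory.get(sessionId);
  if (!chunks) throw new Error(`Session ${sessionId} not found in memory`);
  const i = chunkIndex.get(sessionId) ?? 0; // a missing entry means "start at 0"
  if (i >= chunks.length) return null;      // exhausted: mirrors the handler's null chunk
  chunkIndex.set(sessionId, i + 1);
  return chunks[i];
}

chunkMemory.set('session-a', ['a1', 'a2']);
chunkMemory.set('session-b', ['b1']);

const firstA = advance('session-a');  // 'a1'
const firstB = advance('session-b');  // 'b1' (session-b's cursor is its own)
const secondA = advance('session-a'); // 'a2'
const doneB = advance('session-b');   // null
```

Because the cursor lives server-side, a client never passes an index; repeated calls with the same sessionId are enough, which is why the tool's only input is `sessionId`.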