get_next_chunk

Retrieves the next subtitle chunk for sequential translation processing after conversation detection, enabling chunk-by-chunk handling of large SRT files.

Instructions

šŸ“¦ CHUNK RETRIEVAL FOR TRANSLATION WORKFLOW šŸ“¦

šŸŽÆ PURPOSE: Retrieves the next chunk from memory for sequential processing. Use this after detect_conversations with storeInMemory=true.

šŸ”„ HOW IT WORKS:

  • Automatically tracks which chunk to return next

  • Returns actual chunk data with subtitle text content

  • Advances to next chunk automatically

  • Returns chunk: null once all chunks have been processed

šŸ“„ PARAMETERS:

  • sessionId: Session ID from detect_conversations response

šŸ“¤ RETURNS (example response after this list):

  • chunk: Complete chunk data with subtitle text (or null if done)

  • chunkIndex: Current chunk number (0-based)

  • totalChunks: Total chunks available

  • hasMore: Boolean indicating if more chunks exist

  • message: Status message
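
For orientation, the example below shows a mid-session response reconstructed from the handler quoted under Implementation Reference. The payload arrives as serialized JSON in the result's first text content item, and the handler also returns success and nextInstruction fields beyond those listed above; the chunk value here is a placeholder, since its exact shape comes from detect_conversations.

    {
      "success": true,
      "chunk": "… subtitle text for this conversation chunk …",
      "chunkIndex": 2,
      "totalChunks": 5,
      "hasMore": true,
      "message": "Retrieved chunk 3 of 5",
      "nextInstruction": "Call get_next_chunk again to get chunk 4"
    }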

šŸ’” USAGE PATTERN (code sketch after this list):

  1. Call detect_conversations with storeInMemory=true

  2. Get sessionId from response

  3. Call get_next_chunk repeatedly until hasMore=false

  4. Process each chunk for translation

  5. Use translate_srt() on individual chunks
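
The numbered steps above can be driven by a small client loop, sketched below. This is illustrative only: it assumes an MCP client exposing a callTool() method (as in the official TypeScript SDK), and the translate_srt argument name is a guess, since it is not documented on this page.

    // Illustrative driver loop; `client` is any MCP client with callTool().
    async function translateAllChunks(client: any, sessionId: string): Promise<void> {
      while (true) {
        const result = await client.callTool({
          name: 'get_next_chunk',
          arguments: { sessionId },
        });
        // Each response carries its JSON payload as serialized text.
        const payload = JSON.parse(result.content[0].text);
        if (payload.chunk === null) break; // session already exhausted
        console.log(`Translating chunk ${payload.chunkIndex + 1} of ${payload.totalChunks}`);
        await client.callTool({
          name: 'translate_srt',
          arguments: { content: payload.chunk }, // argument name assumed
        });
        if (!payload.hasMore) break; // that was the last chunk
      }
    }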

šŸ“‹ EXAMPLE: {"sessionId": "srt-session-123456789"}

āš ļø NOTE:

  • Each call advances to the next chunk automatically

  • Store sessionId from detect_conversations response

  • Use this for chunk-by-chunk processing of large files

Input Schema

Name        Required   Description                                                     Default
sessionId   Yes        Session ID from detect_conversations with storeInMemory=true   —

Implementation Reference

  • The core handler function that implements the get_next_chunk MCP tool. It retrieves the next SRT chunk from session-specific memory, returns chunk data or null if done, updates the current index, and provides status information (a design note follows this list).
    private async handleGetNextChunk(args: any) {
      const { sessionId } = args;

      if (!this.chunkMemory.has(sessionId)) {
        throw new Error(`Session ${sessionId} not found in memory`);
      }

      const chunks = this.chunkMemory.get(sessionId);
      const currentIndex = this.chunkIndex.get(sessionId) || 0;

      // Session exhausted: report completion with chunk = null.
      if (currentIndex >= chunks.length) {
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({
                success: true,
                chunk: null,
                chunkIndex: currentIndex,
                totalChunks: chunks.length,
                hasMore: false,
                message: 'All chunks have been processed',
              }, null, 2),
            },
          ],
        };
      }

      const currentChunk = chunks[currentIndex];
      // Advance the per-session cursor so the next call returns the following chunk.
      this.chunkIndex.set(sessionId, currentIndex + 1);

      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({
              success: true,
              chunk: currentChunk,
              chunkIndex: currentIndex,
              totalChunks: chunks.length,
              hasMore: currentIndex + 1 < chunks.length,
              message: `Retrieved chunk ${currentIndex + 1} of ${chunks.length}`,
              nextInstruction: currentIndex + 1 < chunks.length
                ? `Call get_next_chunk again to get chunk ${currentIndex + 2}`
                : 'All chunks have been retrieved',
            }, null, 2),
          },
        ],
      };
    }
  • Registration of the get_next_chunk tool in the MCP server's tool list, including name, detailed description, and input schema definition.
    {
      name: 'get_next_chunk',
      description: `šŸ“¦ CHUNK RETRIEVAL FOR TRANSLATION WORKFLOW šŸ“¦

šŸŽÆ PURPOSE: Retrieves the next chunk from memory for sequential processing. Use this after detect_conversations with storeInMemory=true.

šŸ”„ HOW IT WORKS:
- Automatically tracks which chunk to return next
- Returns actual chunk data with subtitle text content
- Advances to next chunk automatically
- Returns null when all chunks processed

šŸ“„ PARAMETERS:
- sessionId: Session ID from detect_conversations response

šŸ“¤ RETURNS:
- chunk: Complete chunk data with subtitle text (or null if done)
- chunkIndex: Current chunk number (0-based)
- totalChunks: Total chunks available
- hasMore: Boolean indicating if more chunks exist
- message: Status message

šŸ’” USAGE PATTERN:
1. Call detect_conversations with storeInMemory=true
2. Get sessionId from response
3. Call get_next_chunk repeatedly until hasMore=false
4. Process each chunk for translation
5. Use translate_srt() on individual chunks

šŸ“‹ EXAMPLE: {"sessionId": "srt-session-123456789"}

āš ļø NOTE:
- Each call advances to the next chunk automatically
- Store sessionId from detect_conversations response
- Use this for chunk-by-chunk processing of large files`,
      inputSchema: {
        type: 'object',
        properties: {
          sessionId: {
            type: 'string',
            description: 'Session ID from detect_conversations with storeInMemory=true',
          },
        },
        required: ['sessionId'],
      },
    },
  • Dispatch routing in the CallToolRequestSchema handler that maps the 'get_next_chunk' tool call to the handleGetNextChunk method.
    case 'get_next_chunk':
      return await this.handleGetNextChunk(args);
  • Class properties used by the handler to store chunks per session and track the current chunk index (a hypothetical seeding sketch follows this list).
    private chunkMemory = new Map<string, any>();   // Store chunks by session ID
    private chunkIndex = new Map<string, number>(); // Track current chunk index per session
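
The handler makes a deliberate design choice: the per-session cursor advances as a side effect of every successful read, so a chunk cannot be re-fetched within the same session. Given only the tools shown on this page, a client that fails after retrieving a chunk but before translating it would recover by starting a fresh session via detect_conversations with storeInMemory=true.

The class properties are seeded elsewhere. As a hypothetical sketch (not taken from the actual source), detect_conversations with storeInMemory=true might populate them like this, with the session ID format inferred from the šŸ“‹ EXAMPLE above:

    // Hypothetical seeding logic; names and ID format are assumptions.
    const sessionId = `srt-session-${Date.now()}`;
    this.chunkMemory.set(sessionId, chunks); // conversation-grouped SRT chunks
    this.chunkIndex.set(sessionId, 0);       // next call returns the first chunk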
