# todo_management

Manage and track tasks for SRT file processing workflows, including parsing, translation, quality checks, and output generation. Create, update, and monitor task progress across the different processing stages.

## Instructions

Manage tasks for SRT processing workflows.
WHAT IT DOES:

- Create, update, and track tasks during SRT processing
- Monitor progress across different processing stages
- Manage task priorities and dependencies

ACTIONS:

- create: Create a new task
- update: Update task status
- complete: Mark task as completed
- list: List all tasks
- get_status: Get overall task status

TASK TYPES:

- srt_parse: Parse and validate SRT file
- conversation_detect: Detect conversation chunks
- chunk_optimize: Optimize chunks for AI processing
- ai_process: Process with AI model
- translate: Translate content
- quality_check: Quality assurance
- output_generate: Generate final output

EXAMPLE USAGE:

1. Create task: `{"action": "create", "taskType": "srt_parse", "title": "Parse SRT file", "priority": "high"}`
2. Update status: `{"action": "update", "taskId": "task-123", "status": "completed"}`
3. List tasks: `{"action": "list"}`
4. Get status: `{"action": "get_status"}`
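The handler rejects requests that omit the arguments an action needs (for example, `create` without `taskType` and `title`). A client can mirror those checks before calling the tool; the sketch below is illustrative, and `validateTodoArgs` is a hypothetical helper name, not part of the server API.

```typescript
// Client-side validation mirroring the per-action argument checks that the
// todo_management handler enforces server-side. Returns an error message,
// or null when the arguments are acceptable for the given action.
type TodoAction = 'create' | 'update' | 'complete' | 'list' | 'get_status';

interface TodoArgs {
  action: TodoAction;
  taskType?: string;
  title?: string;
  taskId?: string;
  status?: string;
}

function validateTodoArgs(args: TodoArgs): string | null {
  switch (args.action) {
    case 'create':
      // create requires taskType and title
      return args.taskType && args.title
        ? null
        : 'taskType and title are required for create action';
    case 'update':
      // update requires taskId and status
      return args.taskId && args.status
        ? null
        : 'taskId and status are required for update action';
    case 'complete':
      // complete requires taskId
      return args.taskId ? null : 'taskId is required for complete action';
    default:
      // list and get_status take no extra arguments
      return null;
  }
}
```

Running the validator locally avoids a round trip to the server for requests that would fail anyway.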
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | Todo action to perform | |
| description | No | Task description | |
| metadata | No | Additional task metadata | |
| priority | No | Task priority | medium |
| status | No | Task status (for update action) | |
| taskId | No | Task ID (for update/complete actions) | |
| taskType | No | Type of task | |
| title | No | Task title | |
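Only `action` is required; per the schema, `priority` defaults to `medium` when omitted. A client could mirror that default as sketched below (`withSchemaDefaults` is an illustrative helper, not part of the server API).

```typescript
// Mirror the schema's declared default for priority on the client side.
interface TodoRequest {
  action: string;
  priority?: 'low' | 'medium' | 'high' | 'urgent';
  [key: string]: unknown;
}

function withSchemaDefaults(args: TodoRequest): TodoRequest {
  // priority defaults to "medium" when omitted, per the input schema;
  // an explicit value in args overrides the default via the spread.
  return { priority: 'medium', ...args };
}
```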
## Implementation Reference
- **src/mcp/server.ts:725-806 (handler)** — Primary execution handler for the todo_management tool. Handles all actions (create, update, complete, list, get_status) by destructuring the input args and delegating core logic to the SRTProcessingTodoManager instance. Returns structured MCP responses.

  ```typescript
  private async handleTodoManagement(args: any) {
    const { action, taskType, title, description, priority, taskId, status, metadata } = args;

    switch (action) {
      case 'create':
        if (!taskType || !title) {
          throw new Error('taskType and title are required for create action');
        }
        const todo = await this.todoManager.createSRTProcessingTodos(
          title,
          1,
          'translation'
        );
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({ success: true, todo }, null, 2),
            },
          ],
        };

      case 'update':
        if (!taskId || !status) {
          throw new Error('taskId and status are required for update action');
        }
        // Note: SRTProcessingTodoManager doesn't have direct update method
        // This is a placeholder for the actual implementation
        console.log(`Updating todo ${taskId} to status ${status}`);
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({ success: true, message: 'Todo updated' }, null, 2),
            },
          ],
        };

      case 'complete':
        if (!taskId) {
          throw new Error('taskId is required for complete action');
        }
        // Note: SRTProcessingTodoManager doesn't have direct complete method
        // This is a placeholder for the actual implementation
        console.log(`Completing todo ${taskId}`);
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({ success: true, message: 'Todo completed' }, null, 2),
            },
          ],
        };

      case 'list':
        // Use the todo manager's getTodosByStage method
        const todos = await this.todoManager.getTodosByStage('all');
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({ success: true, todos }, null, 2),
            },
          ],
        };

      case 'get_status':
        // Use the todo manager's getProcessingStatistics method
        const statistics = await this.todoManager.getProcessingStatistics();
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({ success: true, status: statistics }, null, 2),
            },
          ],
        };

      default:
        throw new Error(`Unknown action: ${action}`);
    }
  }
  ```
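Every branch of the handler serializes its JSON payload into an MCP text content block, so a client unwraps `content[0].text` to get the result. The sketch below shows that unwrapping; `parseTodoResult` is an illustrative name, not part of the server.

```typescript
// Unwrap the JSON payload that todo_management returns inside an MCP
// text content block.
interface McpToolResponse {
  content: { type: string; text: string }[];
}

function parseTodoResult(res: McpToolResponse): any {
  // The handler always puts the JSON-serialized result in content[0].text.
  return JSON.parse(res.content[0].text);
}

// Example envelope, shaped like the handler's 'update' response.
const updateResponse: McpToolResponse = {
  content: [
    {
      type: 'text',
      text: JSON.stringify({ success: true, message: 'Todo updated' }, null, 2),
    },
  ],
};
```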
- **src/mcp/server.ts:339-381 (schema)** — MCP input schema for the todo_management tool, defining the action enum and supporting parameters for task management operations.

  ```typescript
  inputSchema: {
    type: 'object',
    properties: {
      action: {
        type: 'string',
        enum: ['create', 'update', 'complete', 'list', 'get_status'],
        description: 'Todo action to perform',
      },
      taskType: {
        type: 'string',
        enum: ['srt_parse', 'conversation_detect', 'chunk_optimize', 'ai_process', 'translate', 'quality_check', 'output_generate'],
        description: 'Type of task',
      },
      title: {
        type: 'string',
        description: 'Task title',
      },
      description: {
        type: 'string',
        description: 'Task description',
      },
      priority: {
        type: 'string',
        enum: ['low', 'medium', 'high', 'urgent'],
        description: 'Task priority',
        default: 'medium',
      },
      taskId: {
        type: 'string',
        description: 'Task ID (for update/complete actions)',
      },
      status: {
        type: 'string',
        enum: ['pending', 'in_progress', 'completed', 'failed', 'cancelled'],
        description: 'Task status (for update action)',
      },
      metadata: {
        type: 'object',
        description: 'Additional task metadata',
      },
    },
    required: ['action'],
  },
  ```
- **src/mcp/server.ts:309-382 (registration)** — Tool registration in the ListToolsRequestSchema response, including the name, detailed description, and input schema (the schema body is identical to the schema reference above and elided here).

  ```typescript
  {
    name: 'todo_management',
    description: `Manage tasks for SRT processing workflows.

  WHAT IT DOES:
  - Create, update, and track tasks during SRT processing
  - Monitor progress across different processing stages
  - Manage task priorities and dependencies

  ACTIONS:
  - create: Create a new task
  - update: Update task status
  - complete: Mark task as completed
  - list: List all tasks
  - get_status: Get overall task status

  TASK TYPES:
  - srt_parse: Parse and validate SRT file
  - conversation_detect: Detect conversation chunks
  - chunk_optimize: Optimize chunks for AI processing
  - ai_process: Process with AI model
  - translate: Translate content
  - quality_check: Quality assurance
  - output_generate: Generate final output

  EXAMPLE USAGE:
  1. Create task: {"action": "create", "taskType": "srt_parse", "title": "Parse SRT file", "priority": "high"}
  2. Update status: {"action": "update", "taskId": "task-123", "status": "completed"}
  3. List tasks: {"action": "list"}
  4. Get status: {"action": "get_status"}`,
    inputSchema: { /* identical to the schema reference above */ },
  },
  ```
- **SRTProcessingTodoManager (helper class)** — Core helper class instantiated in the server and used by the handler for todo operations: creating SRT-specific todo lists, retrieving statistics, and listing todos by stage.

  ```typescript
  export class SRTProcessingTodoManager {
    private todoTool: TodoToolInterface;
    private modelType: string;

    constructor(modelType: string) {
      this.modelType = modelType;
      this.todoTool = TodoToolFactory.createTodoTool(modelType);
    }

    /**
     * Create comprehensive SRT processing todos
     */
    async createSRTProcessingTodos(
      fileName: string,
      chunkCount: number,
      processingType: 'translation' | 'analysis' | 'conversation-detection',
      targetLanguage?: string
    ): Promise<TodoListResult> {
      const todos: Omit<TodoItem, 'id' | 'createdAt' | 'updatedAt'>[] = [];

      // Add file analysis todos
      todos.push(...SRTTodoTemplates.createFileAnalysisTodos(fileName));

      // Add chunk detection todos
      todos.push(...SRTTodoTemplates.createChunkDetectionTodos(chunkCount));

      // Add chunk optimization todos
      todos.push(...SRTTodoTemplates.createChunkOptimizationTodos(chunkCount, this.modelType));

      // Add processing-specific todos
      switch (processingType) {
        case 'translation':
          if (targetLanguage) {
            todos.push(...SRTTodoTemplates.createTranslationTodos(chunkCount, targetLanguage));
          }
          break;
        case 'analysis':
          todos.push(...SRTTodoTemplates.createAnalysisTodos(chunkCount));
          break;
        case 'conversation-detection':
          // Already covered by chunk detection todos
          break;
      }

      // Add validation todos
      todos.push(...SRTTodoTemplates.createValidationTodos(chunkCount));

      return this.todoTool.createTodoList(todos as TodoItem[]);
    }

    /**
     * Update processing progress
     */
    async updateProcessingProgress(
      stage: 'file-analysis' | 'chunk-detection' | 'chunk-optimization' | 'processing' | 'validation',
      status: TodoStatus
    ): Promise<void> {
      const todos = await this.todoTool.getTodoList();
      const stageTodos = todos.filter(todo =>
        todo.metadata?.processingContext?.processingType === stage
      );
      for (const todo of stageTodos) {
        await this.todoTool.updateTodoStatus(todo.id, status);
      }
    }

    /**
     * Get processing statistics
     */
    async getProcessingStatistics(): Promise<TodoStatistics> {
      return this.todoTool.getTodoStatistics();
    }

    /**
     * Get todos by processing stage
     */
    async getTodosByStage(stage: string): Promise<TodoItem[]> {
      const todos = await this.todoTool.getTodoList();
      if (stage === 'all') {
        return todos;
      }
      return todos.filter(todo =>
        todo.metadata?.processingContext?.processingType === stage
      );
    }
  }
  ```
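The stage filtering used by `getTodosByStage` and `updateProcessingProgress` can be sketched standalone over plain objects: `'all'` returns every todo, while any other stage matches `metadata.processingContext.processingType`. The `StageTodo` shape below is a minimal assumption for illustration.

```typescript
// Minimal reimplementation of the stage filter behind getTodosByStage.
interface StageTodo {
  id: string;
  metadata?: { processingContext?: { processingType?: string } };
}

function filterByStage(todos: StageTodo[], stage: string): StageTodo[] {
  if (stage === 'all') {
    return todos; // 'all' bypasses the filter entirely
  }
  // Optional chaining tolerates todos with no metadata or no processingContext
  return todos.filter(
    todo => todo.metadata?.processingContext?.processingType === stage
  );
}
```

Note that todos without a `processingContext` never match a named stage, which is why the `'list'` action passes `'all'` to retrieve everything.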
- **src/mcp/server.ts:71-806 (helper)** — Instantiation of the shared SRTProcessingTodoManager instance used throughout the server for todo management.

  ```typescript
  private todoManager = new SRTProcessingTodoManager('generic'); // Shared TODO manager
  ```