SRT Translation MCP Server

by omd0
AI_WORKFLOW_EXAMPLE.md
# AI Workflow Example for SRT Translation

## Complete Workflow for AI Agents

This MCP server provides a complete workflow for AI agents to analyze, chunk, and translate SRT files while tracking progress with the TODO tool.

## Step-by-Step Workflow

### 1. ANALYZE SRT File (with Memory Storage)

```json
{
  "tool": "detect_conversations",
  "parameters": {
    "content": "1\n00:00:02,000 --> 00:00:07,000\nHello world",
    "storeInMemory": true,
    "createTodos": true
  }
}
```

**What it returns:** File structure, language detection, speaker info, chunk metadata, `sessionId`, todos.

### 2. OPTIMIZE CHUNKS

```json
{
  "tool": "context_optimization",
  "parameters": {
    "content": "1\n00:00:02,000 --> 00:00:07,000\nHello world",
    "optimizationType": "optimize",
    "maxContextSize": 50000,
    "maxChunkSize": 8
  }
}
```

**What it does:** Splits large chunks, merges small ones, and optimizes chunks for AI processing.

### 3. CREATE TODO TASKS

```json
{
  "tool": "todo_management",
  "parameters": {
    "action": "create",
    "taskType": "translate",
    "title": "Translate SRT to Spanish",
    "priority": "high"
  }
}
```

**What it does:** Creates a task to track translation progress.

### 4. PROCESS CHUNKS ONE BY ONE

```json
{
  "tool": "get_next_chunk",
  "parameters": {
    "sessionId": "srt-session-123"
  }
}
```

**What it returns:** Next chunk data, chunk index, total chunks, `hasMore` flag.

### 5. TRANSLATE EACH CHUNK

```json
{
  "tool": "translate_srt",
  "parameters": {
    "content": "1\n00:00:02,000 --> 00:00:07,000\nHello world",
    "targetLanguage": "es"
  }
}
```

**What it does:** Translates individual chunk content into the target language.

### 6. UPDATE TODO STATUS

```json
{
  "tool": "todo_management",
  "parameters": {
    "action": "complete",
    "taskId": "task-123"
  }
}
```

**What it does:** Marks the task as completed.

## Alternative: Use Unified AI Processing

For a single-step approach with any AI model:

```json
{
  "tool": "unified_ai_process",
  "parameters": {
    "content": "1\n00:00:02,000 --> 00:00:07,000\nHello world",
    "model": "claude",
    "targetLanguage": "es",
    "enableTodoTracking": true
  }
}
```

## Key Benefits

- **Automatic Chunking:** Prevents AI context limit issues
- **Progress Tracking:** The TODO tool manages the workflow
- **Multi-Model Support:** Works with Claude, GPT, and Gemini
- **Error Handling:** Robust error messages and recovery
- **Context Optimization:** Intelligent chunk management

## Complete Workflow Example

Here's a complete example of how an AI agent should process a large SRT file:

```json
// Step 1: Analyze and store chunks in memory
{
  "tool": "detect_conversations",
  "parameters": {
    "content": "large_srt_file_content",
    "storeInMemory": true,
    "createTodos": true
  }
}

// Step 2: Process chunks one by one (repeat until hasMore: false)
{
  "tool": "get_next_chunk",
  "parameters": {
    "sessionId": "srt-session-123"
  }
}

// Step 3: Translate each chunk
{
  "tool": "translate_srt",
  "parameters": {
    "content": "chunk_content_from_get_next_chunk",
    "targetLanguage": "es"
  }
}

// Step 4: Update TODO status
{
  "tool": "todo_management",
  "parameters": {
    "action": "complete",
    "taskId": "chunk_task_id"
  }
}
```

## Quick Start

1. Start with `detect_conversations(storeInMemory=true, createTodos=true)` to analyze and store chunks
2. Use `get_next_chunk()` to retrieve chunks one by one
3. Translate each chunk with `translate_srt()`
4. Update the TODO status as you progress
5. Repeat until `hasMore: false`

This workflow ensures efficient, trackable SRT translation with proper context management and no context limit issues.
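The Quick Start loop can be sketched in Python. This is a minimal, illustrative sketch, not part of the server: the `call_tool` helper below is a hypothetical stand-in for a real MCP client and simply returns canned responses so the loop is runnable. The tool names and parameters match the steps above, but the exact response field shapes are assumptions.

```python
def call_tool(name, params):
    """Hypothetical stand-in for an MCP client's tool-call dispatch.

    Returns canned responses so the workflow loop below is runnable;
    a real agent would send these calls to the SRT Translation MCP server.
    """
    if name == "detect_conversations":
        return {"sessionId": "srt-session-123", "totalChunks": 3}
    if name == "get_next_chunk":
        idx = call_tool.state["next"]
        call_tool.state["next"] += 1
        return {
            "chunk": f"chunk-{idx}",          # simulated chunk content
            "chunkIndex": idx,
            "hasMore": call_tool.state["next"] < 3,
        }
    if name == "translate_srt":
        # Simulate translation by tagging the content with the language code.
        return {"translated": params["content"] + " [" + params["targetLanguage"] + "]"}
    if name == "todo_management":
        return {"status": "completed"}
    raise ValueError(f"unknown tool: {name}")

call_tool.state = {"next": 0}  # stub-only state for the fake chunk cursor


def translate_file(content, target_language="es"):
    """Run the analyze -> get chunk -> translate -> complete-TODO loop."""
    session = call_tool("detect_conversations", {
        "content": content,
        "storeInMemory": True,
        "createTodos": True,
    })
    translated = []
    while True:
        chunk = call_tool("get_next_chunk", {"sessionId": session["sessionId"]})
        result = call_tool("translate_srt", {
            "content": chunk["chunk"],
            "targetLanguage": target_language,
        })
        translated.append(result["translated"])
        call_tool("todo_management", {
            "action": "complete",
            "taskId": f"task-{chunk['chunkIndex']}",  # hypothetical task id scheme
        })
        if not chunk["hasMore"]:  # stop once the server reports no chunks remain
            break
    return translated


parts = translate_file("1\n00:00:02,000 --> 00:00:07,000\nHello world")
```

The loop mirrors steps 1, 4, 5, and 6 above: one session, one translation and one TODO completion per chunk, terminating on `hasMore: false`.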

## MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/omd0/srt-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.