SRT Translation MCP Server

by omd0

detect_conversations

Analyzes SRT subtitle files to detect conversations, identify languages, and create optimized chunks for the translation workflow. Returns metadata only, so large subtitle files can be processed efficiently.

Instructions

🚀 CHUNK-BASED TRANSLATION WORKFLOW INSTRUCTIONS 🚀

📋 OVERVIEW: This tool analyzes SRT files and creates intelligent chunks for efficient translation. It returns METADATA ONLY - use get_next_chunk() and translate_srt() for the actual content.

šŸ” WHAT IT DOES:

  • SMART INPUT: Auto-detects file paths vs SRT content

  • Creates small chunks (1-3 subtitles each) optimized for AI processing

  • Detects languages (Arabic, English, Spanish, French) per chunk

  • Identifies speakers and conversation boundaries

  • Provides translation priority rankings (high/medium/low)

  • Stores chunks in memory to avoid context limits

  • Creates individual TODO tasks for tracking progress
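The "smart input" bullet above means a single `content` parameter accepts either a file path or raw SRT text. One plausible detection heuristic is shown below; this is an illustrative sketch, not the server's documented logic:

```python
import re

# SRT cues contain timing lines of the form "HH:MM:SS,mmm --> HH:MM:SS,mmm";
# a bare file path will not. This is one plausible heuristic for the
# path-vs-content auto-detection, not the server's actual implementation.
SRT_TIMESTAMP = re.compile(
    r"\d{2}:\d{2}:\d{2},\d{3}\s*-->\s*\d{2}:\d{2}:\d{2},\d{3}"
)

def looks_like_srt_content(content: str) -> bool:
    """Return True if the string appears to be raw SRT text rather than a path."""
    return bool(SRT_TIMESTAMP.search(content))
```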

📊 WHAT IT RETURNS (SMALL RESPONSE):

  • chunkCount: Total number of chunks created

  • totalDuration: File duration in milliseconds

  • languageDistribution: Language counts (e.g., {"ar": 45, "en": 12})

  • previewChunk: Preview of first chunk metadata only

  • sessionId: For retrieving chunks later

  • message: Instructions for next steps

  • todos: Individual tasks for each chunk
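Put together, a response might look like the following sketch. All values are made up, and the inner fields of `previewChunk` and `todos` are assumptions, not documented output:

```python
# Illustrative shape of a detect_conversations response.
# Values are examples only; previewChunk/todos inner fields are assumptions.
example_response = {
    "chunkCount": 57,
    "totalDuration": 5400000,  # milliseconds
    "languageDistribution": {"ar": 45, "en": 12},
    "previewChunk": {"index": 0, "language": "ar", "priority": "high"},  # hypothetical fields
    "sessionId": "session-123",  # hypothetical value
    "message": "Use get_next_chunk() to retrieve chunks one by one.",
    "todos": [{"id": "chunk-0", "status": "pending"}],  # hypothetical fields
}
```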

🎯 RECOMMENDED WORKFLOW:

  1. Call detect_conversations with storeInMemory=true

  2. Review metadata to understand file structure (SMALL RESPONSE)

  3. Use get_next_chunk to process chunks one by one

  4. Use translate_srt() for actual translation

  5. Track progress with todo_management
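The five steps above can be sketched as a driver loop, assuming a generic `call_tool(name, arguments)` helper that issues MCP tool calls. The argument names passed to `get_next_chunk` and `translate_srt` here are assumptions, not documented on this page:

```python
# Sketch of the chunk-based workflow. call_tool is a stand-in for whatever
# mechanism your MCP client uses to invoke server tools; the get_next_chunk
# and translate_srt argument names below are illustrative assumptions.

def run_translation_workflow(call_tool, content, target_language):
    # Step 1: analyze the file; only metadata comes back, chunks stay server-side.
    meta = call_tool("detect_conversations", {
        "content": content,
        "storeInMemory": True,
        "createTodos": True,
    })
    session_id = meta["sessionId"]

    # Steps 3-4: pull chunks one at a time and translate each.
    results = []
    for _ in range(meta["chunkCount"]):
        chunk = call_tool("get_next_chunk", {"sessionId": session_id})
        results.append(call_tool("translate_srt", {
            "sessionId": session_id,
            "chunk": chunk,
            "targetLanguage": target_language,
        }))
    return results
```

Processing chunks one by one like this is what keeps each model call small enough to avoid context overflow on large subtitle files.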

💡 EXAMPLES:

File Path Input: {"content": "/path/to/file.srt", "storeInMemory": true, "createTodos": true}

SRT Content Input: {"content": "1\n00:00:02,000 --> 00:00:07,000\nHello world", "storeInMemory": true}

āš ļø IMPORTANT:

  • This returns METADATA ONLY - no actual text content

  • Response is SMALL to avoid context overflow

  • Use get_next_chunk() to retrieve individual chunks

  • Use translate_srt() for actual translation

  • Store chunks in memory for large files to avoid context limits

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | SRT file content OR file path to analyze (auto-detected) | |
| createTodos | No | Create individual TODO tasks for each chunk | false |
| sessionId | No | Session ID for memory storage (auto-generated if not provided) | |
| storeInMemory | No | Store chunks in memory to avoid context limits | false |

Input Schema (JSON Schema)

{
  "properties": {
    "content": {
      "description": "SRT file content OR file path to analyze (auto-detected)",
      "type": "string"
    },
    "createTodos": {
      "default": false,
      "description": "Create individual TODO tasks for each chunk (default: false)",
      "type": "boolean"
    },
    "sessionId": {
      "description": "Session ID for memory storage (optional, auto-generated if not provided)",
      "type": "string"
    },
    "storeInMemory": {
      "default": false,
      "description": "Store chunks in memory to avoid context limits (default: false)",
      "type": "boolean"
    }
  },
  "required": ["content"],
  "type": "object"
}
