
SRT Translation MCP Server

by omd0

Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default

No arguments

Schema

Prompts

Interactive templates invoked by user choice

Name | Description

No prompts

Resources

Contextual data attached and managed by the client

Name | Description

No resources

Tools

Functions exposed to the LLM to take actions

Name | Description
parse_srt

Parse SRT file content and return structured data

write_srt

Write structured subtitle data back to SRT file format
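
The two tools above are inverses of each other. As a rough illustration (not the server's actual implementation — its output schema isn't shown on this page), a minimal parse/write round trip looks like:

```python
import re

def parse_srt(content: str) -> list[dict]:
    """Parse SRT text into a list of {index, start, end, text} entries."""
    entries = []
    for block in re.split(r"\n\s*\n", content.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        start, end = lines[1].split(" --> ")
        entries.append({
            "index": int(lines[0]),
            "start": start.strip(),
            "end": end.strip(),
            "text": "\n".join(lines[2:]),  # subtitle text may span lines
        })
    return entries

def write_srt(entries: list[dict]) -> str:
    """Serialize parsed entries back into SRT file format."""
    blocks = [
        f"{e['index']}\n{e['start']} --> {e['end']}\n{e['text']}"
        for e in entries
    ]
    return "\n\n".join(blocks) + "\n"
```

Field names here (`index`, `start`, `end`, `text`) are illustrative; check the tool's actual schema before relying on them.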

detect_conversations

🚀 CHUNK-BASED TRANSLATION WORKFLOW INSTRUCTIONS 🚀

📋 OVERVIEW: This tool analyzes SRT files and creates intelligent chunks for efficient translation. It returns METADATA ONLY - use get_next_chunk() and translate_srt() for actual content.

🔍 WHAT IT DOES:

  • SMART INPUT: Auto-detects file paths vs SRT content

  • Creates small chunks (1-3 subtitles each) optimized for AI processing

  • Detects languages (Arabic, English, Spanish, French) per chunk

  • Identifies speakers and conversation boundaries

  • Provides translation priority rankings (high/medium/low)

  • Stores chunks in memory to avoid context limits

  • Creates individual TODO tasks for tracking progress

📊 WHAT IT RETURNS (SMALL RESPONSE):

  • chunkCount: Total number of chunks created

  • totalDuration: File duration in milliseconds

  • languageDistribution: Language counts (e.g., {"ar": 45, "en": 12})

  • previewChunk: Preview of first chunk metadata only

  • sessionId: For retrieving chunks later

  • message: Instructions for next steps

  • todos: Individual tasks for each chunk

🎯 RECOMMENDED WORKFLOW:

  1. Call detect_conversations with storeInMemory=true

  2. Review metadata to understand file structure (SMALL RESPONSE)

  3. Use get_next_chunk to process chunks one by one

  4. Use translate_srt() for actual translation

  5. Track progress with todo_management

💡 EXAMPLES:

File Path Input: {"content": "/path/to/file.srt", "storeInMemory": true, "createTodos": true}

SRT Content Input: {"content": "1\n00:00:02,000 --> 00:00:07,000\nHello world", "storeInMemory": true}

⚠️ IMPORTANT:

  • This returns METADATA ONLY - no actual text content

  • Response is SMALL to avoid context overflow

  • Use get_next_chunk() to retrieve individual chunks

  • Use translate_srt() for actual translation

  • Store chunks in memory for large files to avoid context limits
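
To make the metadata-only contract concrete, here is a simplified sketch of the response shape described above. The fixed-size chunking and single-field language lookup are placeholder assumptions; the real tool detects conversation boundaries, speakers, and languages itself.

```python
# Simplified sketch of the metadata-only response described above.
# Field names mirror the documented response; the chunking heuristic
# here (fixed groups of 3) stands in for the server's smarter logic.
def detect_conversations_metadata(subtitles: list[dict], chunk_size: int = 3) -> dict:
    chunks = [subtitles[i:i + chunk_size] for i in range(0, len(subtitles), chunk_size)]
    language_distribution: dict[str, int] = {}
    for chunk in chunks:
        # Placeholder: real per-chunk language detection happens server-side.
        lang = chunk[0].get("lang", "en")
        language_distribution[lang] = language_distribution.get(lang, 0) + 1
    return {
        "chunkCount": len(chunks),
        "languageDistribution": language_distribution,
        "sessionId": "srt-session-example",  # hypothetical ID for illustration
        "message": "Metadata only; call get_next_chunk() for content",
    }
```

Note that no subtitle text appears anywhere in the return value — that is the point of the small response.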

get_next_chunk

📦 CHUNK RETRIEVAL FOR TRANSLATION WORKFLOW 📦

🎯 PURPOSE: Retrieves the next chunk from memory for sequential processing. Use this after detect_conversations with storeInMemory=true.

🔄 HOW IT WORKS:

  • Automatically tracks which chunk to return next

  • Returns actual chunk data with subtitle text content

  • Advances to next chunk automatically

  • Returns null when all chunks processed

📄 PARAMETERS:

  • sessionId: Session ID from detect_conversations response

📤 RETURNS:

  • chunk: Complete chunk data with subtitle text (or null if done)

  • chunkIndex: Current chunk number (0-based)

  • totalChunks: Total chunks available

  • hasMore: Boolean indicating if more chunks exist

  • message: Status message

💡 USAGE PATTERN:

  1. Call detect_conversations with storeInMemory=true

  2. Get sessionId from response

  3. Call get_next_chunk repeatedly until hasMore=false

  4. Process each chunk for translation

  5. Use translate_srt() on individual chunks

📋 EXAMPLE: {"sessionId": "srt-session-123456789"}

āš ļø NOTE:

  • Each call advances to the next chunk automatically

  • Store sessionId from detect_conversations response

  • Use this for chunk-by-chunk processing of large files
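
The usage pattern above is a simple drain loop. In this sketch, `call_tool` is a stand-in for however your MCP client invokes tools — here it is stubbed with an in-memory session so the example runs on its own; field names follow the documented return shape.

```python
# Stubbed session store so the loop below is self-contained.
SESSIONS = {"srt-session-123456789": {"chunks": ["chunk A", "chunk B"], "cursor": 0}}

def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP client call to get_next_chunk (hypothetical)."""
    session = SESSIONS[args["sessionId"]]
    i, chunks = session["cursor"], session["chunks"]
    if i >= len(chunks):
        # All chunks processed: chunk is null, hasMore is false.
        return {"chunk": None, "chunkIndex": i, "totalChunks": len(chunks), "hasMore": False}
    session["cursor"] = i + 1  # each call advances automatically
    return {"chunk": chunks[i], "chunkIndex": i, "totalChunks": len(chunks),
            "hasMore": session["cursor"] < len(chunks)}

processed = []
while True:
    result = call_tool("get_next_chunk", {"sessionId": "srt-session-123456789"})
    if result["chunk"] is None:
        break
    processed.append(result["chunk"])  # hand each chunk to translate_srt()
```

You can equivalently stop as soon as `hasMore` is false, saving one call; checking for a null `chunk` is the more defensive of the two.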

translate_srt

🌐 SRT TRANSLATION HELPER TOOL 🌐

🚨 CRITICAL: THIS IS A HELPER TOOL ONLY - AI DOES THE TRANSLATION! 🚨

🎯 PURPOSE: This tool helps prepare SRT content for AI translation but DOES NOT translate text itself. The AI assistant must perform the actual translation work.

šŸ“ WHAT IT DOES:

  • Parses SRT content and extracts subtitle text for AI translation

  • Preserves timing and formatting structure

  • Returns structured data for AI to translate

  • Provides context and metadata for better translation

āŒ WHAT IT DOES NOT DO:

  • āŒ Does NOT translate text automatically

  • āŒ Does NOT return translated content

  • āŒ Does NOT perform any AI translation

✅ WHAT IT RETURNS:

  • Structured SRT data with original text

  • Timing and formatting information

  • Translation context and metadata

  • Ready-to-translate format for AI

🔄 RECOMMENDED WORKFLOW:

  1. Use detect_conversations to analyze file structure

  2. Use get_next_chunk to get individual chunks

  3. Use translate_srt to prepare chunk for AI translation

  4. AI assistant translates the text content

  5. AI assistant combines results into final SRT file

💡 USAGE PATTERNS:

Prepare Full File for Translation: {"content": "full SRT content", "targetLanguage": "es", "sourceLanguage": "en"}

Prepare Individual Chunk for Translation: {"content": "chunk SRT content", "targetLanguage": "es", "sourceLanguage": "en"}

āš ļø CRITICAL INSTRUCTIONS:

  • This tool ONLY prepares content for AI translation

  • AI assistant must do the actual text translation

  • Use this to get structured data, then translate with AI

  • Return format is ready for AI processing
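
The division of labor described above — the tool extracts text while preserving timing, the assistant supplies translations, and the results are merged back — can be sketched as follows. All field names here are illustrative, not the tool's actual schema.

```python
def prepare_for_translation(entries: list[dict], target_language: str) -> dict:
    """Separate translatable text from timing, like a translate_srt-style prepare step."""
    return {
        "targetLanguage": target_language,
        "segments": [{"id": e["index"], "text": e["text"]} for e in entries],
        "timing": {e["index"]: (e["start"], e["end"]) for e in entries},
    }

def merge_translations(prepared: dict, translations: dict[int, str]) -> list[dict]:
    """Recombine AI-supplied translations with the preserved timing."""
    merged = []
    for seg in prepared["segments"]:
        start, end = prepared["timing"][seg["id"]]
        merged.append({"index": seg["id"], "start": start, "end": end,
                       "text": translations[seg["id"]]})
    return merged
```

The key invariant is that timing never passes through the translation step, so mistranslation can never corrupt timestamps.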

todo_management

Manage tasks for SRT processing workflows.

WHAT IT DOES:

  • Create, update, and track tasks during SRT processing

  • Monitor progress across different processing stages

  • Manage task priorities and dependencies

ACTIONS:

  • create: Create a new task

  • update: Update task status

  • complete: Mark task as completed

  • list: List all tasks

  • get_status: Get overall task status

TASK TYPES:

  • srt_parse: Parse and validate SRT file

  • conversation_detect: Detect conversation chunks

  • chunk_optimize: Optimize chunks for AI processing

  • ai_process: Process with AI model

  • translate: Translate content

  • quality_check: Quality assurance

  • output_generate: Generate final output

EXAMPLE USAGE:

  1. Create task: {"action": "create", "taskType": "srt_parse", "title": "Parse SRT file", "priority": "high"}

  2. Update status: {"action": "update", "taskId": "task-123", "status": "completed"}

  3. List tasks: {"action": "list"}

  4. Get status: {"action": "get_status"}
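
The actions listed above amount to a small task store with a dispatch on `action`. This is a minimal sketch under that reading — action and status names follow the docs, but the real tool's schema and ID format may differ.

```python
import itertools

_ids = itertools.count(1)
TASKS: dict[str, dict] = {}

def todo_management(action: str, **kwargs) -> dict:
    """Toy dispatcher mirroring the documented actions (illustrative only)."""
    if action == "create":
        task_id = f"task-{next(_ids)}"  # hypothetical ID format
        TASKS[task_id] = {"taskId": task_id, "status": "pending", **kwargs}
        return TASKS[task_id]
    if action == "update":
        TASKS[kwargs["taskId"]].update(kwargs)
        return TASKS[kwargs["taskId"]]
    if action == "complete":
        TASKS[kwargs["taskId"]]["status"] = "completed"
        return TASKS[kwargs["taskId"]]
    if action == "list":
        return {"tasks": list(TASKS.values())}
    if action == "get_status":
        done = sum(1 for t in TASKS.values() if t["status"] == "completed")
        return {"total": len(TASKS), "completed": done}
    raise ValueError(f"unknown action: {action}")
```

In the chunk workflow, detect_conversations with createTodos=true would create one such task per chunk, and you would mark each complete as its translation finishes.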

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/omd0/srt-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.