FFmpeg Micro
Server Details
FFmpeg Micro MCP Server. Transcode videos from n8n or Make using FFmpeg in the cloud. Code+Docs: https://github.com/javidjamae/ffmpeg-micro-mcp/
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 6 of 6 tools scored. Lowest: 3.5/5.
Each tool has a clearly distinct purpose with no overlap: cancel_transcode cancels jobs, get_download_url provides signed URLs, get_transcode fetches job state, list_transcodes lists jobs, transcode_and_wait is a convenience wrapper, and transcode_video creates jobs. The descriptions explicitly differentiate them, preventing misselection.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., cancel_transcode, get_download_url, transcode_video). This uniformity makes the set predictable and easy to understand, with no deviations in style.
With 6 tools, this server is well-scoped for video transcoding workflows. It covers job creation, status tracking, listing, cancellation, and output retrieval without being overly complex or sparse, making each tool essential for the domain.
The toolset provides complete coverage of the transcode job lifecycle: creation (transcode_video), inspection (get_transcode, list_transcodes), cancellation (cancel_transcode), convenience (transcode_and_wait), and output access (get_download_url). There are no obvious gaps, enabling full agent workflows.
Available Tools
6 tools

cancel_transcode (Cancel Transcode)
Cancel a queued or processing transcode job. Jobs that are already completed, failed, or cancelled cannot be cancelled and return an error.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Transcode job UUID to cancel |
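The cancellability rule above can be sketched as a client-side guard (a hedged illustration: the `status` values come from the tool descriptions, and the function name is hypothetical, not part of the server's API):

```python
# Per the description, only queued or processing jobs can be cancelled;
# completed, failed, or cancelled jobs make cancel_transcode return an error.
CANCELLABLE_STATES = {"queued", "processing"}

def can_cancel(job: dict) -> bool:
    """Return True if cancel_transcode would accept this job."""
    return job.get("status") in CANCELLABLE_STATES
```

Checking the state first (via get_transcode) avoids a guaranteed error call, though the server enforces the same rule either way.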
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a mutation tool (implied by 'Cancel'), specifies error conditions (jobs in certain states cannot be cancelled), and indicates it may return an error. This adds valuable context beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a clarifying constraint in the second. Both sentences earn their place by providing essential usage and error information without any waste, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a mutation with error conditions), no annotations, and no output schema, the description is reasonably complete. It covers purpose, usage constraints, and behavioral traits; mentioning side effects or response formats would make it more complete, which slightly lowers the score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'id' parameter documented as 'Transcode job UUID to cancel'. The description does not add further meaning beyond this, such as format examples or validation rules, so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Cancel') and resource ('queued or processing transcode job'), distinguishing it from siblings like 'list_transcodes' (list) or 'transcode_video' (create). It precisely defines what the tool does without being tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool (for queued or processing jobs) and when not to use it (for completed, failed, or cancelled jobs, which return errors). However, it does not explicitly mention alternatives like 'get_transcode' for checking job status or compare with other siblings, keeping it at a 4.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_download_url (Get Download URL)
Generate a short-lived (10 minute) signed HTTPS URL for a completed transcode's output file. The job must be in completed status. Use this instead of the output_url field on the job object, which is a gs:// URL that HTTP clients cannot fetch directly.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Completed transcode job UUID |
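The gs://-versus-HTTPS distinction can be sketched as a small client-side check (a hedged illustration; the job field names follow the descriptions above, and `download_url_args` is a hypothetical helper):

```python
def needs_signed_url(url: str) -> bool:
    """gs:// URLs reference cloud-storage objects that plain HTTP clients
    cannot fetch; exchange them via get_download_url instead."""
    return url.startswith("gs://")

def download_url_args(job: dict) -> dict:
    """Build the get_download_url arguments for a completed job.
    The tool returns an error for any other status."""
    if job.get("status") != "completed":
        raise ValueError("get_download_url requires a job in completed status")
    return {"id": job["id"]}
```

Since the returned URL expires after 10 minutes, fetch the file promptly rather than storing the link.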
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the URL is 'short-lived (10 minute)', 'signed', and 'HTTPS'. It also specifies the prerequisite condition ('job must be in `completed` status'). However, it doesn't mention potential error conditions, rate limits, or authentication requirements, which would be helpful for a complete behavioral picture.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences. The first sentence states the core purpose and key constraints. The second sentence provides crucial usage guidance by contrasting with an alternative. Every word serves a clear purpose with zero redundancy, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description provides substantial context: purpose, constraints, behavioral traits, and usage guidance. It effectively explains what the tool does and when to use it. The main gap is the lack of information about return values or error conditions, which would be helpful given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'id' clearly documented as 'Completed transcode job UUID'. The description adds no additional parameter semantics beyond what the schema provides, but it does reinforce the 'completed' status requirement which relates to parameter validity. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('generate a short-lived signed HTTPS URL') and identifies the target resource ('completed transcode's output file'). It distinguishes itself from the sibling 'get_transcode' by focusing on URL generation rather than job status retrieval, and explicitly contrasts with the 'output_url' field on job objects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: it specifies when to use this tool ('for a completed transcode's output file'), when not to use it (if the job is not in 'completed' status), and names a clear alternative (the 'output_url' field on job objects). It also explains why this alternative is insufficient ('gs:// URL that HTTP clients cannot fetch directly').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_transcode (Get Transcode)
Fetch the current state of a single transcode job by ID, including status (queued/processing/completed/failed) and output_url when completed.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Transcode job UUID |
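A minimal sketch of how an agent might route on a get_transcode result (the status values come from the description; the mapping to follow-up tools is an assumption drawn from the sibling tools' docs):

```python
def next_step(job: dict) -> str:
    """Map a get_transcode status to a plausible follow-up action."""
    status = job["status"]
    if status in ("queued", "processing"):
        return "poll get_transcode again"
    if status == "completed":
        # output_url is a gs:// URL; exchange it for a signed HTTPS link
        return "call get_download_url"
    # failed (or cancelled) jobs are terminal
    return "inspect the job, then optionally resubmit via transcode_video"
```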
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool fetches state, including status and output_url, which helps understand its read-only nature and output format. However, it doesn't cover behavioral aspects like error handling, rate limits, authentication needs, or whether it's idempotent, leaving gaps for a tool that queries job status.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Fetch the current state...') and includes key details like status and output_url. There is no wasted verbiage, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no nested objects) and 100% schema coverage, the description is minimally adequate. However, with no annotations and no output schema, it fails to fully compensate by explaining return values beyond status and output_url, such as error responses or additional metadata, leaving the agent with incomplete context for reliable use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'id' documented as 'Transcode job UUID'. The description adds no additional meaning beyond this, such as format examples or validation rules. Since the schema fully covers the parameter, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Fetch') and resource ('current state of a single transcode job'), specifying it retrieves status and output_url. However, it doesn't explicitly differentiate from siblings like 'list_transcodes' (bulk retrieval) or 'transcode_and_wait' (initiation with waiting), leaving some ambiguity about when to choose this specific tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'by ID' and status details, suggesting it's for checking a specific job's progress. However, it lacks explicit guidance on when to use this versus alternatives like 'list_transcodes' for multiple jobs or 'transcode_and_wait' for automated completion, and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_transcodes (List Transcodes)
List transcode jobs for the authenticated account, with optional filters for status and time range. Paginated (default page 1, limit 20).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-indexed page number | 1 |
| limit | No | Page size (max 100) | 20 |
| since | No | ISO timestamp — only return jobs created at/after this time | |
| until | No | ISO timestamp — only return jobs created at/before this time | |
| status | No | Filter by job status | |
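A sketch of assembling list_transcodes arguments, with the documented pagination defaults and the max-100 limit enforced client-side (the helper name is hypothetical; `since`/`until` take ISO timestamps per the schema):

```python
from datetime import datetime, timedelta, timezone

def build_list_params(status=None, since=None, until=None, page=1, limit=20):
    """Assemble list_transcodes arguments; defaults mirror the documented
    pagination (page 1, limit 20) and the schema's max limit of 100."""
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    params = {"page": page, "limit": limit}
    if status is not None:
        params["status"] = status
    if since is not None:
        params["since"] = since.isoformat()
    if until is not None:
        params["until"] = until.isoformat()
    return params

# Example: failed jobs created in the last 24 hours
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
params = build_list_params(status="failed", since=cutoff)
```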
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: authentication requirement ('authenticated account'), pagination behavior (defaults and limits), and filtering capabilities. It doesn't mention rate limits, error handling, or response format, but covers essential operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose and efficiently covers key details (filters, pagination). Every word earns its place with zero redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only list tool with 5 parameters and no output schema, the description is reasonably complete. It covers authentication, filtering, and pagination. However, it lacks details on response format (e.g., structure of returned jobs) and error scenarios, which would be helpful given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description adds marginal value by mentioning 'optional filters for status and time range' and pagination defaults, but doesn't provide additional syntax or format details beyond what's in the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('transcode jobs'), specifies the scope ('for the authenticated account'), and distinguishes it from siblings like 'get_transcode' (singular) and 'transcode_and_wait' (creation). It's specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing jobs with filters, but doesn't explicitly state when to use this tool versus alternatives like 'get_transcode' (for a single job) or 'transcode_and_wait' (for creating/processing). It provides some context but lacks clear guidance on tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transcode_and_wait (Transcode and Wait)
One-shot convenience tool: creates a transcode job, polls until it reaches a terminal state (completed/failed/cancelled) or the timeout expires, and returns the final job plus a signed download URL if completed. Use this when you want the full transcode in one step without managing polling yourself.
| Name | Required | Description | Default |
|---|---|---|---|
| inputs | Yes | One to ten input videos. Multiple inputs are concatenated in order. | |
| preset | No | Simple mode — quality/resolution presets. Ignored if `options` is provided. | |
| options | No | Advanced mode — raw FFmpeg options or virtual options. Overrides `preset`. | |
| outputFormat | Yes | Container format for the output file | |
| timeoutSeconds | No | Max time to wait for the job to complete, in seconds (max 1800) | 600 (10 min) |
| pollIntervalSeconds | No | Polling interval in seconds | 3 |
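transcode_and_wait bundles the polling loop an agent would otherwise run against get_transcode. A minimal sketch of that loop, with `fetch_job` standing in for a get_transcode call (the function names are illustrative, not part of the server's API):

```python
import time

TERMINAL_STATES = {"completed", "failed", "cancelled"}

def wait_for_job(fetch_job, job_id, timeout_seconds=600, poll_interval_seconds=3):
    """Poll until the job reaches a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        job = fetch_job(job_id)
        if job["status"] in TERMINAL_STATES:
            return job
        time.sleep(poll_interval_seconds)
    raise TimeoutError(f"job {job_id} not terminal after {timeout_seconds}s")
```

The defaults here mirror the tool's documented ones (600 s timeout, 3 s interval); a larger pollIntervalSeconds reduces call volume at the cost of slower completion detection.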
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's polling behavior, timeout handling, and return values (final job status plus signed download URL if completed). However, it doesn't mention authentication requirements, rate limits, or error handling specifics, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly concise with two sentences that each earn their place: the first explains the tool's behavior and value proposition, the second provides clear usage guidance. No wasted words, front-loaded with the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 6 parameters, nested objects, and no output schema, the description provides good context about the tool's polling behavior and return values. However, it doesn't explain what the 'final job' object contains or provide examples of terminal states beyond naming them, leaving some gaps in understanding the complete workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, but it does provide context about the tool's overall behavior with parameters (like polling with timeoutSeconds and pollIntervalSeconds). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('creates a transcode job, polls until it reaches a terminal state') and distinguishes it from sibling tools by explaining it's a 'one-shot convenience tool' that handles polling automatically, unlike the more granular 'transcode_video' and 'get_transcode' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('when you want the full transcode in one step without managing polling yourself') and implies when not to use it (when you need to manage polling manually or use other operations like cancellation, listing, or getting download URLs separately, which are covered by sibling tools).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transcode_video (Transcode Video)
Create a video transcode job on FFmpeg Micro. Accepts one or more input videos (gs:// or https://) and an output format. Returns immediately with a queued job — use get_transcode, list_transcodes, or transcode_and_wait to follow progress.
| Name | Required | Description | Default |
|---|---|---|---|
| inputs | Yes | One to ten input videos. Multiple inputs are concatenated in order. | |
| preset | No | Simple mode — quality/resolution presets. Ignored if `options` is provided. | |
| options | No | Advanced mode — raw FFmpeg options or virtual options. Overrides `preset`. | |
| outputFormat | Yes | Container format for the output file |
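The preset-versus-options precedence ("options overrides preset") can be sketched as a request builder (a hedged illustration; the preset name "720p" and the options payload below are made-up examples, not documented values):

```python
def build_transcode_request(inputs, output_format, preset=None, options=None):
    """Assemble transcode_video arguments. When both are supplied,
    advanced-mode `options` wins and `preset` is ignored."""
    if not 1 <= len(inputs) <= 10:
        raise ValueError("inputs must contain 1 to 10 videos")
    req = {"inputs": list(inputs), "outputFormat": output_format}
    if options is not None:
        req["options"] = options  # preset deliberately dropped
    elif preset is not None:
        req["preset"] = preset
    return req
```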
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the asynchronous nature ('Returns immediately with a queued job'), the need for follow-up actions, and the input constraints ('gs:// or https://'). However, it doesn't mention potential limitations like rate limits, authentication requirements, or error conditions that would be helpful for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly front-loaded with the core purpose in the first sentence, followed by essential behavioral context. Both sentences earn their place by providing critical information about the tool's asynchronous nature and sibling relationships. There is zero wasted verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description does well by explaining the asynchronous behavior and follow-up requirements. However, it could provide more context about error handling, job queuing behavior, or what 'queued job' means in practice. The completeness is good but not exhaustive for a tool that creates jobs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema: it mentions the input URL formats and output format but doesn't provide additional context about parameter interactions or usage patterns. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create a video transcode job'), identifies the target system ('on FFmpeg Micro'), and specifies the resource ('one or more input videos'). It distinguishes from siblings by mentioning alternative tools for progress tracking, making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: it states this tool 'Returns immediately with a queued job' and explicitly names three sibling tools ('get_transcode', 'list_transcodes', 'transcode_and_wait') to use for following progress. This gives clear context for when to use this asynchronous tool versus synchronous alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!