Future Video Studio
Server Details
Create and manage cinematic AI video renders through the Future Video Studio Agent API.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: ariadne-coil/fvs-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 7 of 7 tools scored. Lowest: 2.8/5.
Each tool has a clear, distinct purpose: submit, cancel, check status (standard vs paid), download, create paid quote, and provide an example request. No overlaps.
All tools follow the consistent pattern fvs_verb_noun (e.g., fvs_cancel_render, fvs_get_render_status), making the set predictable.
Seven tools cover the core render lifecycle (submit, cancel, status, download) plus the paid workflow and an example request; the set is neither excessive nor lacking.
The set covers submission, cancellation, status (two variants), download, and paid quoting. Missing an update tool, but that is acceptable for a render service.
Available Tools
7 tools

fvs_cancel_render (Grade: A)
Cancel a Future Video Studio render job.
Provide either `project_id` or the full `cancel_url` returned by fvs_submit_render.

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | | |
| base_url | No | | |
| cancel_url | No | | |
| project_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies cancellation is a destructive action but does not explicitly state irreversibility, consequences, required permissions, or rate limits. The description is minimal for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loading the purpose and then providing necessary parameter usage. No redundant information; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (cancellation) and the presence of an output schema, the description should mention return behavior, error conditions, and prerequisites (e.g., a render must exist). It links to fvs_submit_render but lacks broader context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only explains two out of four parameters (project_id, cancel_url). The api_key and base_url parameters are undocumented, leaving the agent uninformed about their purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it cancels a Future Video Studio render job. The verb 'Cancel' and resource 'render job' are specific. Among siblings like submit and get status, cancel is distinct, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on using either 'project_id' or the 'cancel_url' from fvs_submit_render. It does not discuss when not to use the tool or compare to siblings, but the parameter alternatives are clearly explained.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
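The either/or contract between `project_id` and `cancel_url` can be sketched as a small pre-flight argument builder. This is a hypothetical helper, not part of the server; it only mirrors the description's stated constraint:

```python
def cancel_args(project_id=None, cancel_url=None, api_key=None):
    """Build arguments for an fvs_cancel_render call (hypothetical helper).

    Exactly one of project_id / cancel_url must be supplied, per the
    tool description's either/or contract.
    """
    if (project_id is None) == (cancel_url is None):
        raise ValueError("pass exactly one of project_id or cancel_url")
    if project_id is not None:
        args = {"project_id": project_id}
    else:
        args = {"cancel_url": cancel_url}
    if api_key is not None:
        args["api_key"] = api_key  # explicit key; env credentials may be preferable
    return args
```

Validating locally before calling keeps a misconfigured agent from issuing a destructive cancellation with ambiguous targeting.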
fvs_create_paid_render_quote (Grade: A)
Create a no-account Link payment quote for an FVS render.
The backend returns HTTP 402 payment details as data: `payment_url`,
`status_url`, `claim_token`, `amount_cents`, `currency`, and a raw
`www_authenticate` challenge. Pay `payment_url` with Link's MPP flow, then
poll with fvs_get_paid_render_status. Local file uploads are not available
in paid quote mode; use public HTTPS `upload_urls` when assets are needed.

| Name | Required | Description | Default |
|---|---|---|---|
| request | Yes | | |
| base_url | No | | |
| upload_urls | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description does a good job disclosing behavioral traits: the backend returns HTTP 402 with payment details, and the tool requires payment via Link. It also notes that local file uploads are not available in paid quote mode. Missing details on error handling or idempotency, but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences that efficiently convey the purpose, workflow, and constraints. Every sentence adds value, and the critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description adequately covers the quote creation context and integrates with the sibling poll tool. However, it misses details on the base_url parameter and the structure of the request object, which may affect completeness for complex use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains the usage of upload_urls but does not describe the required request object or the base_url parameter. The description adds limited meaning beyond what the schema provides, leaving key parameters under-documented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a no-account Link payment quote for an FVS render, specifying the backend returns payment details. It distinguishes itself from sibling tools like fvs_submit_render and fvs_get_paid_render_status by focusing on quote creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear workflow: create quote, pay via Link's MPP flow, then poll with fvs_get_paid_render_status. It also indicates when to use upload_urls for assets, implying local uploads are not available. However, it does not explicitly state when not to use this tool in favor of alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
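Since the description enumerates the HTTP 402 payment fields the backend returns, an agent can check a quote response for completeness before paying. A sketch, assuming the quote arrives as a flat dict with exactly the documented field names:

```python
# Field names taken from the tool description's documented 402 payload.
PAYMENT_FIELDS = (
    "payment_url", "status_url", "claim_token",
    "amount_cents", "currency", "www_authenticate",
)

def extract_payment_details(quote: dict) -> dict:
    """Pull the HTTP 402 payment details out of a quote response.

    Hypothetical helper: the exact response shape is an assumption.
    """
    missing = [f for f in PAYMENT_FIELDS if f not in quote]
    if missing:
        raise KeyError(f"quote is missing payment fields: {missing}")
    return {f: quote[f] for f in PAYMENT_FIELDS}
```

After payment via `payment_url`, the extracted `status_url` and `claim_token` feed directly into fvs_get_paid_render_status.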
fvs_download_final_video (Grade: C)
Download a completed render from its signed final_video_url.
| Name | Required | Description | Default |
|---|---|---|---|
| output_path | Yes | | |
| final_video_url | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as download duration, disk writing behavior, or error handling for invalid URLs. The brief description leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no redundancy, but it may be overly terse, missing important details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low schema coverage and lack of annotations, the description is insufficiently complete. It does not address parameter semantics or behavioral context, even though an output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate but does not explain the parameters. 'output_path' is ambiguous (e.g., local path? format?) and 'final_video_url' is only implicitly described.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Download' and identifies the resource as 'a completed render from its signed final_video_url', clearly distinguishing it from sibling tools which handle cancellation, quoting, status, and submission.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, no prerequisites (e.g., render must be completed), and no exclusions. The description only implies usage without any context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
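Because the description leaves `output_path` and `final_video_url` semantics implicit, a pre-flight check can catch the obvious failure modes before invoking the tool. A minimal sketch; the "signed https URL" and "local path" interpretations are assumptions drawn from the description:

```python
from urllib.parse import urlparse

def download_args(final_video_url: str, output_path: str) -> dict:
    """Validate and package arguments for fvs_download_final_video.

    Hypothetical pre-flight check; the tool itself performs the download.
    """
    if urlparse(final_video_url).scheme != "https":
        raise ValueError("final_video_url should be a signed https URL")
    if not output_path:
        raise ValueError("output_path (a local file path) is required")
    return {"final_video_url": final_video_url, "output_path": output_path}
```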
fvs_example_render_request (Grade: A)
Return a minimal scene render request agents can adapt.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It correctly indicates a read-only operation (returning a request) with no side effects, though it does not detail what 'minimal' entails.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence efficiently conveys the tool's purpose without excess words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and an output schema likely defines the return format, the description is sufficient for its simplicity, though it could hint at how to use the output with sibling tools like fvs_submit_render.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so baseline is 4. The description adds no parameter info, which is acceptable as none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a minimal scene render request, distinguishing it from siblings that perform actions like submitting, canceling, or checking status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the output is for adaptation by agents, but lacks explicit when-to-use or when-not-to-use guidance, nor does it mention alternatives like directly using a full render request.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fvs_get_paid_render_status (Grade: A)
Check a no-account paid render created with fvs_create_paid_render_quote.
Provide the full `status_url` or pass both `quote_id` and `claim_token`.

| Name | Required | Description | Default |
|---|---|---|---|
| base_url | No | | |
| quote_id | No | | |
| status_url | No | | |
| claim_token | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It indicates a non-destructive check but does not disclose details such as whether the operation consumes the claim_token, error responses, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with front-loaded information. No unnecessary words; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, return value details are not required. However, the description lacks context about possible status states, prerequisites, and error handling, making it minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description clarifies the relationship between status_url, quote_id, and claim_token, indicating two alternative parameter groups. However, base_url is not mentioned, and with 0% schema description coverage, the compensation is partial.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'check' and the resource 'no-account paid render', distinguishing it from sibling tools like fvs_get_render_status by specifying it is for renders created with fvs_create_paid_render_quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides two alternative ways to use the tool: provide the full status_url or pass both quote_id and claim_token. It implicitly restricts usage to no-account paid renders, but does not explicitly exclude other contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
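The two alternative parameter groups (full `status_url`, or `quote_id` plus `claim_token`) can be enforced with a small builder. A hypothetical helper mirroring the description's contract:

```python
def paid_status_args(status_url=None, quote_id=None, claim_token=None, base_url=None):
    """Build arguments for fvs_get_paid_render_status (hypothetical helper).

    Accepts either the full status_url, or the quote_id + claim_token pair,
    per the description's two alternative parameter groups.
    """
    if status_url is not None:
        args = {"status_url": status_url}
    elif quote_id is not None and claim_token is not None:
        args = {"quote_id": quote_id, "claim_token": claim_token}
    else:
        raise ValueError("pass status_url, or both quote_id and claim_token")
    if base_url is not None:
        args["base_url"] = base_url
    return args
```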
fvs_get_render_status (Grade: A)
Check a Future Video Studio render job.
Provide either `project_id` or the full `status_url` returned by fvs_submit_render.

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | | |
| base_url | No | | |
| project_id | No | | |
| status_url | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description simply states 'Check a ... render job' without explicit mention of side effects or safety. For a read-like operation, it's minimally adequate but lacks detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and the tool's simplicity, the description covers the core functionality but omits important context about authentication parameters (`api_key`, `base_url`) that are in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description adds meaning for `project_id` and `status_url` (either/or), but ignores `api_key` and `base_url` which are not explained. With 0% schema coverage, more compensation is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action ('Check') and the resource ('Future Video Studio render job'), differentiating it from siblings like fvs_submit_render and fvs_cancel_render.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance to use either `project_id` or `status_url` from fvs_submit_render, implying proper usage context. Could be more explicit about alternatives or when not to use, but overall clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
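An agent typically checks status in a loop until the render settles. A polling sketch with an injected fetch callable (so the loop is testable without the server); the state names `queued` and `rendering` are assumptions, since the description does not document the status vocabulary:

```python
import time

def poll_render(fetch_status, interval=5.0, max_polls=120):
    """Poll a status fetcher until the job leaves a pending state.

    fetch_status is any callable returning a status dict, e.g. a wrapper
    around an fvs_get_render_status call. State names are assumed.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status.get("state") not in ("queued", "rendering"):
            return status  # completed, failed, cancelled, etc.
        time.sleep(interval)
    raise TimeoutError("render did not settle within the polling budget")
```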
fvs_submit_render (Grade: A)
Submit a Future Video Studio render job through the FVS Agent API.
Pass the render payload as `request`. For uploads, pass local file paths in `upload_files`; every `request.assets[].filename` must match one uploaded file basename. Prefer credentials from FVS_AGENT_API_KEY instead of passing api_key through the tool call.

| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | | |
| request | Yes | | |
| base_url | No | | |
| upload_urls | No | | |
| upload_files | No | | |
| poll_until_complete | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description mentions authentication preference and upload handling but does not disclose side effects, error behavior, or whether polling modifies state. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five lines, front-loaded purpose, no redundant text. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers core submission and upload, but given complexity (6 params, nested objects, output schema), missing details on 'base_url', 'upload_urls', and polling behavior. Adequate for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, but description adds meaning for 'request' and 'upload_files', and advises on 'api_key'. 'base_url' and 'upload_urls' are unexplained. Partial compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Submit a Future Video Studio render job through the FVS Agent API,' specifying the action (submit), resource (render job), and context. It differentiates from sibling tools like cancel or status checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance: pass payload as 'request', match upload files to filenames, prefer environment variable for API key. Lacks explicit exclusions but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
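The one concrete constraint the description documents — every `request.assets[].filename` must match the basename of one entry in `upload_files` — is easy to verify locally before submitting. A pre-flight sketch (the `assets`/`filename` shape is taken from the description; everything else is assumed):

```python
import os

def check_asset_uploads(request: dict, upload_files: list) -> None:
    """Enforce the documented constraint that each request.assets[].filename
    matches the basename of one entry in upload_files.

    Hypothetical pre-flight helper; raises ValueError on mismatch.
    """
    basenames = {os.path.basename(p) for p in upload_files}
    missing = [
        a["filename"]
        for a in request.get("assets", [])
        if a["filename"] not in basenames
    ]
    if missing:
        raise ValueError(f"assets with no matching uploaded file: {missing}")
```

Running this before fvs_submit_render turns a server-side rejection into an immediate, actionable local error.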
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!