Glama

sora_generate_video

Generate AI videos from text prompts. Describe scenes, actions, and styles to create video content with configurable duration, resolution, and orientation.

Instructions

Generate an AI video from a text prompt using Sora.

This is the primary way to create videos - describe what you want and Sora
will generate a video matching your description.

Use this when:
- You want to generate a video from a text description
- You don't have reference images
- You want creative AI-generated video content

For image-to-video generation, use sora_generate_video_from_image instead.
For character-based video generation, use sora_generate_video_with_character.

Returns:
    Task ID and generated video information including URLs and state.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Description of the video to generate. Be descriptive about the scene, action, style, and mood. Examples: 'A cat running on the river', 'A futuristic cityscape with flying cars at sunset', 'A person walking through a snowy forest'. | |
| model | No | Sora model version. 'sora-2' is the standard model; 'sora-2-pro' offers higher quality and supports 25-second videos. | sora-2 |
| size | No | Video resolution: 'small' for lower resolution, 'large' for higher resolution. | large |
| duration | No | Video duration in seconds. Options: 10, 15, or 25 (25 is only available with the sora-2-pro model). | |
| orientation | No | Video orientation: 'landscape' for horizontal (16:9), 'portrait' for vertical (9:16). | landscape |
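The parameter constraints above (the fixed duration options and the 25-second limit tied to sora-2-pro) can be sketched as client-side validation before issuing the call. This is a hypothetical illustration, not part of the server; the helper name and validation rules are assumptions inferred from the schema descriptions.

```python
# Hypothetical helper that assembles an argument dict for
# sora_generate_video, enforcing the constraints described in the
# input schema (valid durations; 25 s requires sora-2-pro).

VALID_DURATIONS = {10, 15, 25}

def build_sora_args(prompt, model="sora-2", size="large",
                    duration=None, orientation="landscape"):
    """Validate and assemble arguments for a sora_generate_video call."""
    if duration is not None:
        if duration not in VALID_DURATIONS:
            raise ValueError("duration must be 10, 15, or 25 seconds")
        if duration == 25 and model != "sora-2-pro":
            raise ValueError("25-second videos require the sora-2-pro model")
    args = {"prompt": prompt, "model": model, "size": size,
            "orientation": orientation}
    if duration is not None:
        args["duration"] = duration  # omit to use the server default
    return args

# Example: a 25-second portrait clip, which forces the pro model.
payload = build_sora_args(
    "A futuristic cityscape with flying cars at sunset",
    model="sora-2-pro", duration=25, orientation="portrait",
)
```

Only `prompt` is required; the other keys fall back to the schema defaults shown above when omitted.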

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return type ('Task ID and generated video information'), hinting at asynchronous behavior. However, it fails to clarify how this differs from the sibling 'sora_generate_video_async' or mention error handling, rate limits, or content policies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose statement, usage conditions, alternative recommendations, and return value. Information is front-loaded and every sentence serves a distinct purpose. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately summarizes the return value without over-specifying. It covers the tool's role in the ecosystem by naming its siblings. Minor gap: it does not clarify whether this call blocks until completion, which matters given the existence of the '_async' variant and would help an agent decide between the sync and async versions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description reinforces that the prompt should 'describe what you want' but does not add syntax details, validation rules, or semantic constraints beyond what the schema already provides with its examples and enum descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb-resource pair ('Generate an AI video from a text prompt using Sora') and explicitly distinguishes from siblings by contrasting with image-to-video and character-based alternatives. It clearly positions this as the 'primary' text-to-video method.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains an explicit 'Use this when:' section with three specific scenarios. Explicitly names alternatives: 'use sora_generate_video_from_image instead' and 'use sora_generate_video_with_character', providing clear guidance on when to select this tool versus its siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
