
Atlas Cloud MCP Server (Image / Video / LLM APIs)

by AtlasCloudAI

Generate Video

atlas_generate_video

Submit video generation requests using Atlas Cloud's AI models. Specify a model ID and parameters to create videos from text prompts or images, then check results with the prediction ID.

Instructions

Generate a video using Atlas Cloud API.

This tool submits the generation request and returns immediately with a prediction ID. Use atlas_get_prediction to check the result later.

IMPORTANT: The "model" parameter requires an exact model ID (e.g., "kling-video/kling-v3.0-standard-text-to-video"). If you don't know the exact model ID, you MUST first call atlas_list_models with type="Video" to find it. Do NOT guess model IDs.

You should also use atlas_get_model_info to see the full parameter list and schema for your chosen video model before calling this tool.
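The model-discovery step above can be sketched in Python. The tool name (`atlas_list_models`) comes from this page, but the `client` object, its `call()` helper, and the returned list-of-dicts shape are hypothetical stand-ins for whatever MCP client is in use:

```python
def pick_video_model(client, keyword):
    """Look up an exact video model ID instead of guessing one.

    `client.call()` is a hypothetical MCP-client helper; the return shape
    (a list of {"id": ...} dicts) is an assumption, not a documented schema.
    """
    models = client.call("atlas_list_models", type="Video")
    matches = [m for m in models if keyword.lower() in m["id"].lower()]
    if not matches:
        raise ValueError(f"No video model matching {keyword!r}; do not guess IDs")
    # Return the first exact ID found; pass it unchanged to atlas_generate_video.
    return matches[0]["id"]
```

Whatever client you use, the point is the same: resolve the exact ID from atlas_list_models and feed it through verbatim, rather than constructing a plausible-looking ID by hand.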

Args:

  • model (string, required): The exact video model ID. Use atlas_list_models to find valid IDs.

  • params (object, required): Model-specific parameters as a JSON object. Parameters vary by model; use atlas_get_model_info to see available params. Common ones include:

    • "prompt" (string): Text description of the video

    • "image_url" (string): Source image for image-to-video models

    • "duration" (number): Video duration in seconds

    • "aspect_ratio" (string): e.g., "16:9", "9:16"

Returns: A prediction ID to check the result with atlas_get_prediction. Video generation typically takes 1-5 minutes.

Examples:

  • model="kling-video/kling-v3.0-standard-text-to-video", params={"prompt": "a rocket launching into space", "duration": 5}

  • model="bytedance/seedance-v1.5-pro-image-to-video", params={"prompt": "camera panning right", "image_url": "https://example.com/photo.jpg"}
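The submit-then-poll pattern these examples imply can be sketched as follows. The tool names come from this page, but `client.call()` is a hypothetical MCP-client helper, and the response field names (`prediction_id`, `status`) and status values are assumptions; check the actual output of atlas_get_prediction before relying on them:

```python
import time

def generate_and_wait(client, model_id, params, poll_interval=10, timeout=600):
    """Submit a video job, then poll atlas_get_prediction until it settles.

    `client.call()` is a hypothetical MCP-client helper; the response field
    names and status strings below are assumptions, not documented schema.
    """
    submitted = client.call("atlas_generate_video", model=model_id, params=params)
    prediction_id = submitted["prediction_id"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        pred = client.call("atlas_get_prediction", prediction_id=prediction_id)
        if pred.get("status") in ("succeeded", "failed"):
            return pred
        time.sleep(poll_interval)  # generation typically takes 1-5 minutes
    raise TimeoutError(f"Prediction {prediction_id} did not settle in {timeout}s")
```

A generous timeout with a poll interval of several seconds matches the stated 1-5 minute processing window; tight polling loops only waste requests.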

Input Schema

  • model (required): Video model ID
  • params (required): Model-specific parameters as a JSON object. Use atlas_get_model_info to see available parameters for your chosen model.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the asynchronous nature (returns prediction ID immediately), typical processing time (1-5 minutes), and workflow dependencies (must call other tools first). While annotations cover basic hints (not read-only, not destructive, not idempotent, open world), the description provides practical implementation details that help the agent use the tool correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose statement, important usage notes, parameter explanations, return information, and examples. Every sentence serves a purpose: no wasted words. The information is front-loaded with critical workflow requirements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of video generation with model-specific parameters and asynchronous processing, the description provides complete context: explains the full workflow (list models → get model info → generate → check prediction), provides parameter guidance, mentions processing time, and gives concrete examples. This adequately compensates for the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage, the description adds significant semantic value: explains that model IDs must be exact and obtained from atlas_list_models, provides common parameter examples (prompt, image_url, duration, aspect_ratio), clarifies that parameters vary by model, and directs users to atlas_get_model_info for full schemas. This goes well beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Generate a video using Atlas Cloud API') and distinguishes it from siblings by mentioning the asynchronous nature and the need to use atlas_get_prediction for results. It explicitly differentiates itself from atlas_generate_image (video vs. image) and atlas_quick_generate (which likely has a different workflow).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use this tool vs alternatives: must use atlas_list_models first to find model IDs, should use atlas_get_model_info to see parameter schemas, and must use atlas_get_prediction to check results. Also mentions typical processing time (1-5 minutes) which helps set expectations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
