
NoLang MCP Server

by team-tissis

wait_video_generation_and_get_download_url

Monitor video generation progress and retrieve the download URL when complete. Use this tool to track AI-generated videos from text input and access them for download.

Instructions

Polls until video generation completes and returns the download URL.

Input Schema

Name | Required | Description | Default
args | Yes      |             |

Output Schema

Name         | Required | Description                          | Default
status       | Yes      | Current status of the video          |
video_id     | Yes      | Unique identifier for the video      |
download_url | No       | Signed URL for the finished MP4 file |
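As a concrete illustration, a result payload matching this output schema might look like the sketch below. All values are placeholders, and download_url is optional: it is only populated once generation has completed.

```python
# Hypothetical payload matching the output schema; every value here is
# a placeholder, not real server output.
result = {
    "video_id": "00000000-0000-0000-0000-000000000000",
    "status": "completed",
    "download_url": "https://example.com/videos/output.mp4?signature=placeholder",
}

# A client would typically gate the download on the status field,
# since download_url may be absent for non-completed videos.
download_url = result["download_url"] if result["status"] == "completed" else None
print(download_url)
```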

Implementation Reference

  • The main handler function decorated with @mcp.tool. It polls the API for video status at intervals until completion, failure, or timeout, reporting progress and returning the download URL upon success.
    @mcp.tool(
        name="wait_video_generation_and_get_download_url",
        description="Polls until video generation completes and returns the download URL.",
    )
    async def wait_video_generation_and_get_download_url(
        args: VideoWaitArgs,
        ctx: Context,
    ) -> VideoStatusResult:
        start = time.time()
    
        while time.time() - start < args.max_wait_time:
            try:
                status_response = await nolang_api.get_video_status(args.video_id)
            except httpx.HTTPStatusError as e:
                raise RuntimeError(format_http_error(e)) from e
    
            current_status = VideoStatusEnum(status_response.status)
    
            if current_status == VideoStatusEnum.RUNNING:
                await ctx.report_progress(
                    progress=time.time() - start, total=args.max_wait_time
                )  # notify progress to client
                await asyncio.sleep(args.check_interval)
                continue
    
            if current_status == VideoStatusEnum.COMPLETED:
                await ctx.report_progress(
                    progress=args.max_wait_time, total=args.max_wait_time
                )  # notify progress to client (100%)
                return VideoStatusResult(
                    video_id=args.video_id,
                    status=current_status,
                    download_url=status_response.download_url,
                )
    
            if current_status == VideoStatusEnum.FAILED:
                raise RuntimeError(f"Video generation failed. Video ID: {args.video_id}")
    
            # Unknown status – wait and retry
            await asyncio.sleep(args.check_interval)
    
        raise TimeoutError("Video generation did not complete within the time limit.")
  • Pydantic schema for the tool's input arguments: video_id (required), max_wait_time (default 600s), check_interval (default 10s).
    class VideoWaitArgs(BaseModel):
        """Arguments for waiting for video generation completion."""
    
        model_config = ConfigDict(extra="forbid")
    
        video_id: UUID = Field(
            ...,
            description="Video ID of the generation job",
        )
        max_wait_time: int = Field(
            default=600,
            description="Maximum seconds to wait for generation completion",
            ge=1,
            le=3600,
        )
        check_interval: int = Field(default=10, description="Interval (seconds) to check status", ge=1, le=60)
  • Pydantic schema for the tool's output: video_id, status, and download_url.
    class VideoStatusResult(BaseModel):
        model_config = ConfigDict(extra="allow")
    
        video_id: UUID = Field(..., description="Unique identifier for the video")
        status: VideoStatusEnum = Field(..., description="Current status of the video")
        download_url: Optional[str] = Field(None, description="Signed URL for the finished MP4 file")
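The bounds on max_wait_time and check_interval determine how many status requests the handler can issue before timing out. The stdlib-only sketch below mirrors those constraints with a dataclass (it is not the server's actual Pydantic model) and computes the worst-case number of polls, ignoring per-request latency.

```python
from dataclasses import dataclass
import uuid

@dataclass
class VideoWaitArgs:
    """Stdlib stand-in for the Pydantic model above (illustrative only)."""
    video_id: uuid.UUID
    max_wait_time: int = 600   # seconds, must be 1..3600
    check_interval: int = 10   # seconds, must be 1..60

    def __post_init__(self) -> None:
        # Mirror the ge/le constraints from the Pydantic fields.
        if not 1 <= self.max_wait_time <= 3600:
            raise ValueError("max_wait_time must be between 1 and 3600")
        if not 1 <= self.check_interval <= 60:
            raise ValueError("check_interval must be between 1 and 60")

def max_status_checks(args: VideoWaitArgs) -> int:
    # Upper bound on polls: the loop sleeps check_interval seconds per
    # RUNNING iteration, so it can run at most this many times. Real
    # counts are lower, since each iteration also spends time on the
    # HTTP status call itself.
    return args.max_wait_time // args.check_interval

args = VideoWaitArgs(video_id=uuid.uuid4())
print(max_status_checks(args))  # 60 with the defaults
```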
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions polling behavior and returns a download URL, but lacks critical details: it doesn't specify what happens on timeout (e.g., error handling), whether it's idempotent or safe to retry, potential rate limits, or authentication requirements. For a polling tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action and outcome: 'Polls until video generation completes and returns the download URL.' It wastes no words and directly communicates the tool's core functionality, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (polling with configurable timeouts and intervals), the absence of annotations, and the presence of an output schema that documents return values, the description is minimally adequate. It states the purpose and outcome but omits error conditions, dependencies on sibling tools, and behavioral nuances. The output schema reduces the need to explain return values, but an agent needs more context for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%: the exposed input schema provides no descriptions for its parameters. The tool description adds nothing beyond implying that a 'video_id' is needed for polling; it never explains the semantics of 'max_wait_time' or 'check_interval', leaving these critical polling parameters undocumented. With only one top-level parameter ('args'), the baseline expectation is slightly relaxed, but the description still fails to compensate for the missing coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Polls until video generation completes and returns the download URL.' It specifies the action (polling for completion) and the resource (video generation), and indicates the outcome (returning a download URL). However, it doesn't explicitly differentiate from sibling tools like 'list_generated_videos' or 'generate_video_with_setting', which could help an agent understand when to use this specific polling tool versus others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., that a video generation job must already be started, likely via sibling tools like 'generate_video_with_setting'), nor does it specify scenarios where polling is necessary versus checking status with other tools. This lack of context could lead to misuse or confusion for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
