
NoLang MCP Server

by team-tissis

generate_video_with_setting

Generate AI-powered videos using predefined settings by providing text, documents, audio, video, or images as input through the NoLang API.

Instructions

Consumes paid credits. Start video generation using your VideoSetting ID. Provide text, pdf_path, pptx_path, audio_path, video_path, or image_paths as required.
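As a hedged illustration of how these arguments fit together (the UUID and file path below are placeholders, not real values), a client call to this tool wraps everything in a single `args` object:

```python
import json

# Hypothetical arguments payload for generate_video_with_setting.
# The UUID and path are placeholders -- substitute your own VideoSetting ID.
payload = {
    "args": {
        "video_setting_id": "00000000-0000-0000-0000-000000000000",
        "text": "Summarize this deck in a 60-second video.",
        "pdf_path": "/path/to/presentation.pdf",
    }
}

# An MCP client serializes this to JSON for the tool call.
print(json.dumps(payload, indent=2))
```

Only `video_setting_id` is required; the remaining fields default to empty strings and select the generation mode.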

Input Schema

Name  Required  Description
args  Yes       -

Output Schema

Name      Required  Description
video_id  Yes       Unique identifier for the queued video

Implementation Reference

  • The main handler function for the 'generate_video_with_setting' MCP tool. It receives arguments via VideoGenerationFromSettingArgs and delegates to the _generate_video helper to perform the actual API call based on provided inputs.
    async def generate_video_with_setting(
        args: VideoGenerationFromSettingArgs,
    ) -> VideoGenerationResult:
        return await _generate_video(
            args.video_setting_id,
            args.text,
            args.pdf_path,
            args.pptx_path,
            args.audio_path,
            args.video_path,
            args.image_paths,
        )
  • Registration of the 'generate_video_with_setting' tool using the FastMCP @mcp.tool decorator.
    @mcp.tool(
        name="generate_video_with_setting",
        description="Consumes paid credits. Start video generation using your VideoSetting ID. Provide text, pdf_path, pptx_path, audio_path, video_path, or image_paths as required.",
    )
  • Core helper function implementing the video generation logic by routing to appropriate nolang_api methods based on input types (text, PDF, PPTX, audio, video, images).
    async def _generate_video(
        setting: Union[UUID, str, Dict[str, Any]],
        text: str = "",
        pdf_path: str = "",
        pptx_path: str = "",
        audio_path: str = "",
        video_path: str = "",
        image_paths: str = "",
    ) -> VideoGenerationResult:
        """Generate a video and return a structured response."""
    
        try:
            # PDF analysis mode
            if pdf_path and text:
                result = await nolang_api.generate_video_with_pdf_and_text(setting, pdf_path, text)
            # PDF mode
            elif pdf_path:
                result = await nolang_api.generate_video_with_pdf(setting, pdf_path)
            # PPTX mode
            elif pptx_path:
                result = await nolang_api.generate_video_with_pptx(setting, pptx_path)
            # Audio mode
            elif audio_path:
                result = await nolang_api.generate_video_with_audio(setting, audio_path)
            # Video mode
            elif video_path:
                result = await nolang_api.generate_video_with_video(setting, video_path)
            # Text mode (with/without images)
            elif text:
                image_files = None
                if image_paths:
                    image_files = [p.strip() for p in image_paths.split(",") if p.strip()]
                result = await nolang_api.generate_video_with_text(setting, text, image_files)
            else:
                raise ValueError("At least one of text, pdf_path, pptx_path, audio_path or video_path must be provided")
    
            return VideoGenerationResult(video_id=result.video_id)
        except httpx.HTTPStatusError as e:
            # Surface HTTP errors back to the LLM as a structured object
            raise RuntimeError(format_http_error(e)) from e
        except FileNotFoundError as e:
            raise RuntimeError(str(e)) from e
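The routing precedence above (PDF plus text first, then PDF, PPTX, audio, video, and finally text) can be sketched as a standalone function. `select_mode` is a hypothetical helper for illustration, not part of the server:

```python
def select_mode(text="", pdf_path="", pptx_path="",
                audio_path="", video_path="") -> str:
    """Mirror the if/elif precedence used by _generate_video."""
    if pdf_path and text:
        return "pdf_and_text"   # PDF analysis mode wins when both are set
    if pdf_path:
        return "pdf"
    if pptx_path:
        return "pptx"
    if audio_path:
        return "audio"
    if video_path:
        return "video"
    if text:
        return "text"           # optionally combined with image_paths
    raise ValueError("no input provided")

# A PDF plus accompanying text takes the combined analysis path:
print(select_mode(text="notes", pdf_path="deck.pdf"))  # pdf_and_text
```

Note that the ordering matters: if both `audio_path` and `video_path` are given, only the audio branch runs, so callers should supply exactly one media input per request.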
  • Input schema: VideoGenerationToolArgs (base class with common fields) and VideoGenerationFromSettingArgs (specific video_setting_id field) using Pydantic BaseModel for validation.
    class VideoGenerationToolArgs(BaseModel):
        """Base arguments for video generation tools."""
    
        model_config = ConfigDict(extra="forbid")
    
        text: str = Field(
            default="",
            description="Input text for query modes or slideshow_analysis mode",
        )
        pdf_path: str = Field(
            default="",
            description="PDF file path for slideshow modes",
            examples=["/path/to/presentation.pdf"],
        )
        pptx_path: str = Field(
            default="",
            description="PPTX file path for slideshow modes",
            examples=["/path/to/presentation.pptx"],
        )
        audio_path: str = Field(
            default="",
            description="Audio file path for audio_speech mode",
            examples=["/path/to/audio.mp3"],
        )
        video_path: str = Field(
            default="",
            description="Video file path for audio_video mode",
            examples=["/path/to/video.mp4"],
        )
        image_paths: str = Field(
            default="",
            description="Comma-separated image file paths for query modes",
            examples=["image1.jpg,image2.png,image3.jpeg"],
        )
    
    
    class VideoGenerationFromSettingArgs(VideoGenerationToolArgs):
        """Arguments for generating video from video setting ID."""
    
        video_setting_id: UUID = Field(
            ...,
            description="UUID of VideoSetting to use for generation",
        )
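The `image_paths` field is a single comma-separated string rather than a list; the server splits it before calling the API. A minimal sketch of that parsing, mirroring the list comprehension in `_generate_video`:

```python
def parse_image_paths(image_paths: str) -> list[str]:
    """Split a comma-separated path string, dropping blanks and whitespace."""
    return [p.strip() for p in image_paths.split(",") if p.strip()]

print(parse_image_paths("image1.jpg, image2.png ,,image3.jpeg"))
# ['image1.jpg', 'image2.png', 'image3.jpeg']
```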
  • Output schema: VideoGenerationResult defining the video_id returned upon successful generation request.
    class VideoGenerationResult(BaseModel):
        model_config = ConfigDict(extra="allow")
    
        video_id: UUID = Field(..., description="Unique identifier for the queued video")
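Since `video_id` is declared as a UUID, a client can sanity-check the returned identifier with the standard library before polling for completion. A small hedged sketch (the helper name is illustrative, not part of the server):

```python
import uuid

def is_valid_video_id(value: str) -> bool:
    """Return True when value parses as a UUID (any version)."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, AttributeError, TypeError):
        return False

print(is_valid_video_id("12345678-1234-5678-1234-567812345678"))  # True
print(is_valid_video_id("not-a-uuid"))                            # False
```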
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Consumes paid credits', which is valuable cost information, but it doesn't address other critical behaviors: whether this is an asynchronous operation (implied by the existence of 'wait_video_generation_and_get_download_url'), what permissions are needed, rate limits, error conditions, or what the output contains. For a paid service with complex inputs, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that each serve distinct purposes: cost warning and input guidance. It's front-loaded with the most important information (paid credits). However, the second sentence could be structured more clearly to separate the required 'video_setting_id' from the optional media inputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of video generation, multiple input types, paid service nature, and existence of an output schema, the description is minimally adequate. It covers cost and input types but misses critical context about the asynchronous nature (implied by sibling tools), error handling, and workflow integration. The output schema existence reduces the need to describe return values, but more operational context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description lists the available input types (text, pdf_path, pptx_path, audio_path, video_path, image_paths) which provides semantic context beyond the single 'args' parameter shown in the schema. However, it doesn't explain the relationships between these inputs and the 'video_setting_id' or how they interact. With 0% schema description coverage, the description adds significant value but doesn't fully compensate for the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description says the tool will 'Start video generation using your VideoSetting ID', which provides a clear verb ('Start video generation') and resource ('VideoSetting ID'). However, it doesn't distinguish this from its sibling 'generate_video_with_template': both appear to initiate video generation, just with different configuration sources. The purpose is clear, but sibling differentiation is lacking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'Provide text, pdf_path, pptx_path, audio_path, video_path, or image_paths as required' but doesn't explain which scenarios require which inputs or when to choose this over 'generate_video_with_template'. There's no mention of prerequisites, timing considerations, or workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
