
generate_video_with_setting

Generate AI-powered videos using predefined settings by providing text, documents, audio, or visual content as input through the NoLang API.

Instructions

Consumes paid credits. Start video generation using your VideoSetting ID. Provide text, pdf_path, pptx_path, audio_path, video_path, or image_paths as required.

Input Schema

Name    Required    Description    Default
args    Yes
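
For orientation, a hypothetical args payload might look like the sketch below. The values are placeholders, and the field names are taken from the VideoGenerationFromSettingArgs schema shown in the Implementation Reference further down; other inherited fields (pdf_path, pptx_path, audio_path, video_path) would be supplied instead when using those input modes.

  args = {
      "video_setting_id": "123e4567-e89b-12d3-a456-426614174000",  # placeholder VideoSetting UUID
      "text": "Introduce the product in a 60-second explainer.",
      "image_paths": "/tmp/shot1.png,/tmp/shot2.png",  # optional; comma-separated, only used in text mode
  }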

Implementation Reference

  • The handler function for the 'generate_video_with_setting' tool. It receives the arguments as VideoGenerationFromSettingArgs and delegates the generation logic to the _generate_video helper function. A standalone usage sketch follows this reference list.
    async def generate_video_with_setting(
        args: VideoGenerationFromSettingArgs,
    ) -> VideoGenerationResult:
        return await _generate_video(
            args.video_setting_id,
            args.text,
            args.pdf_path,
            args.pptx_path,
            args.audio_path,
            args.video_path,
            args.image_paths,
        )
  • Core helper function implementing the video generation logic. Determines the type of input (text, PDF, PPTX, audio, video, images) and calls the corresponding nolang_api method to start the generation job.
    async def _generate_video(
        setting: Union[UUID, str, Dict[str, Any]],
        text: str = "",
        pdf_path: str = "",
        pptx_path: str = "",
        audio_path: str = "",
        video_path: str = "",
        image_paths: str = "",
    ) -> VideoGenerationResult:
        """Generate a video and return a structured response."""
        try:
            # PDF analysis mode
            if pdf_path and text:
                result = await nolang_api.generate_video_with_pdf_and_text(setting, pdf_path, text)
            # PDF mode
            elif pdf_path:
                result = await nolang_api.generate_video_with_pdf(setting, pdf_path)
            # PPTX mode
            elif pptx_path:
                result = await nolang_api.generate_video_with_pptx(setting, pptx_path)
            # Audio mode
            elif audio_path:
                result = await nolang_api.generate_video_with_audio(setting, audio_path)
            # Video mode
            elif video_path:
                result = await nolang_api.generate_video_with_video(setting, video_path)
            # Text mode (with/without images)
            elif text:
                image_files = None
                if image_paths:
                    image_files = [p.strip() for p in image_paths.split(",") if p.strip()]
                result = await nolang_api.generate_video_with_text(setting, text, image_files)
            else:
                raise ValueError("At least one of text, pdf_path, pptx_path, audio_path or video_path must be provided")
            return VideoGenerationResult(video_id=result.video_id)
        except httpx.HTTPStatusError as e:
            # Surface HTTP errors back to the LLM as a structured object
            raise RuntimeError(format_http_error(e)) from e
        except FileNotFoundError as e:
            raise RuntimeError(str(e)) from e
  • Registration of the tool using the @mcp.tool decorator, specifying the name and description.
    @mcp.tool(
        name="generate_video_with_setting",
        description="Consumes paid credits. Start video generation using your VideoSetting ID. Provide text, pdf_path, pptx_path, audio_path, video_path, or image_paths as required.",
    )
  • Pydantic schema for the input arguments to the tool (inherits common fields from VideoGenerationToolArgs: text, pdf_path, etc.).
    class VideoGenerationFromSettingArgs(VideoGenerationToolArgs):
        """Arguments for generating video from video setting ID."""

        video_setting_id: UUID = Field(
            ...,
            description="UUID of VideoSetting to use for generation",
        )
  • Pydantic schema for the output result of the tool, containing the generated video ID.
    class VideoGenerationResult(BaseModel):
        model_config = ConfigDict(extra="allow")

        video_id: UUID = Field(..., description="Unique identifier for the queued video")
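
As a rough usage sketch (not part of the server code), the handler above could be exercised directly from a test script. This assumes the fields inherited from VideoGenerationToolArgs (text, image_paths, and so on) default to empty strings, mirroring the defaults in _generate_video, that the decorated handler remains awaitable as a plain coroutine, and that nolang_api is configured with valid credentials; the UUID is a placeholder.

import asyncio
from uuid import UUID

async def main() -> None:
    # Text mode with optional reference images; image_paths is a comma-separated
    # string, which _generate_video splits into a list of file paths.
    args = VideoGenerationFromSettingArgs(
        video_setting_id=UUID("123e4567-e89b-12d3-a456-426614174000"),  # placeholder VideoSetting ID
        text="Create a 60-second overview of the quarterly report.",
        image_paths="/tmp/chart1.png, /tmp/chart2.png",
    )
    result = await generate_video_with_setting(args)
    # VideoGenerationResult allows extra fields (extra="allow"), but video_id is always present.
    print(result.video_id)

asyncio.run(main())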
