
NoLang MCP Server

by team-tissis

generate_video_with_template

Create videos using existing templates by providing text, documents, audio, or images. This tool generates AI-powered videos through the NoLang API for various content formats.

Instructions

Consumes paid credits. Start video generation using an official template Video ID. Provide text, pdf_path, pptx_path, audio_path, video_path, or image_paths as required.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| args | Yes      |             |         |
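As an illustrative sketch, an MCP client would invoke this tool with an `args` object carrying the template's `video_id` plus whatever inputs the template needs. The ID and file path below are placeholders, not real resources:

```python
# Hypothetical tool-call payload for generate_video_with_template.
# The video_id and pdf_path values are illustrative placeholders.
tool_call = {
    "name": "generate_video_with_template",
    "arguments": {
        "args": {
            "video_id": "123e4567-e89b-12d3-a456-426614174000",
            "pdf_path": "/path/to/slides.pdf",
            "text": "Summarize the attached deck as a short explainer video.",
        }
    },
}
```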

Implementation Reference

  • The handler function that executes the tool logic: fetches video setting data from the provided video_id and calls the shared _generate_video helper.
    async def generate_video_with_template(
        args: VideoGenerationFromVideoArgs,
    ) -> VideoGenerationResult:
        video_setting_data = await nolang_api.get_video_setting_from_video_id(args.video_id)
        return await _generate_video(
            video_setting_data,
            args.text,
            args.pdf_path,
            args.pptx_path,
            args.audio_path,
            args.video_path,
            args.image_paths,
        )
  • The @mcp.tool decorator registering the tool with FastMCP.
    @mcp.tool(
        name="generate_video_with_template",
        description="Consumes paid credits. Start video generation using an official template Video ID. Provide text, pdf_path, pptx_path, audio_path, video_path, or image_paths as required.",
    )
  • Pydantic schema for the tool's input arguments (inherits common VideoGenerationToolArgs fields like text, pdf_path, etc.).
    class VideoGenerationFromVideoArgs(VideoGenerationToolArgs):
        """Arguments for generating video from existing video ID."""

        video_id: UUID = Field(
            ...,
            description="ID of existing video to use as template",
        )
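Because `video_id` is typed as `UUID`, malformed IDs are rejected at the tool boundary before any API call is made. A minimal stdlib-only sketch of that same validation (without Pydantic) looks like:

```python
import uuid

def validate_video_id(value: str) -> uuid.UUID:
    """Parse a template Video ID, raising ValueError for malformed input.
    (Illustrative helper; not part of the server's code.)"""
    return uuid.UUID(value)

# A well-formed ID parses successfully...
vid = validate_video_id("123e4567-e89b-12d3-a456-426614174000")

# ...while a malformed one raises ValueError.
try:
    validate_video_id("not-a-uuid")
    rejected = False
except ValueError:
    rejected = True
```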
  • Shared helper function implementing the core video generation logic by dispatching to appropriate nolang_api methods based on input types.
    async def _generate_video(
        setting: Union[UUID, str, Dict[str, Any]],
        text: str = "",
        pdf_path: str = "",
        pptx_path: str = "",
        audio_path: str = "",
        video_path: str = "",
        image_paths: str = "",
    ) -> VideoGenerationResult:
        """Generate a video and return a structured response."""
        try:
            # PDF analysis mode
            if pdf_path and text:
                result = await nolang_api.generate_video_with_pdf_and_text(setting, pdf_path, text)
            # PDF mode
            elif pdf_path:
                result = await nolang_api.generate_video_with_pdf(setting, pdf_path)
            # PPTX mode
            elif pptx_path:
                result = await nolang_api.generate_video_with_pptx(setting, pptx_path)
            # Audio mode
            elif audio_path:
                result = await nolang_api.generate_video_with_audio(setting, audio_path)
            # Video mode
            elif video_path:
                result = await nolang_api.generate_video_with_video(setting, video_path)
            # Text mode (with/without images)
            elif text:
                image_files = None
                if image_paths:
                    image_files = [p.strip() for p in image_paths.split(",") if p.strip()]
                result = await nolang_api.generate_video_with_text(setting, text, image_files)
            else:
                raise ValueError("At least one of text, pdf_path, pptx_path, audio_path or video_path must be provided")
            return VideoGenerationResult(video_id=result.video_id)
        except httpx.HTTPStatusError as e:
            # Surface HTTP errors back to the LLM as a structured object
            raise RuntimeError(format_http_error(e)) from e
        except FileNotFoundError as e:
            raise RuntimeError(str(e)) from e


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/team-tissis/nolang-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.