check_video_status

Monitor the progress of a Kling AI video generation task by its task ID, and retrieve the generated video URLs once the task succeeds.

Instructions

Check the status of a video generation task

Input Schema

Name      Required   Description                                                            Default
task_id   Yes        The task ID returned from generate_video or generate_image_to_video   (none)
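
For illustration, here is a minimal sketch of calling this tool from a TypeScript MCP client. It assumes the standard @modelcontextprotocol/sdk client API; the transport command, server entry point, and task ID below are placeholders, not values documented by this server.

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Connect to the Kling MCP server over stdio (command and args are placeholders).
    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(
      new StdioClientTransport({ command: 'node', args: ['dist/index.js'] })
    );

    // Pass the task_id returned earlier by generate_video or generate_image_to_video.
    const result = await client.callTool({
      name: 'check_video_status',
      arguments: { task_id: 'your-task-id' }, // placeholder task ID
    });

    // The server replies with a single text content block describing the task.
    console.log(result.content);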

Implementation Reference

  • The main tool handler for 'check_video_status'. It calls KlingClient.getTaskStatus, formats the status message (including video URLs when the task has succeeded), and returns a text response.
    case 'check_video_status': {
      const status = await klingClient.getTaskStatus(args.task_id as string);
      let statusText = `Task ID: ${status.task_id}\nStatus: ${status.task_status}`;
      if (status.task_status_msg) {
        statusText += `\nMessage: ${status.task_status_msg}`;
      }
      if (status.task_status === 'succeed' && status.task_result?.videos) {
        statusText += '\n\nGenerated Videos:';
        status.task_result.videos.forEach((video, index) => {
          statusText += `\n\nVideo ${index + 1}:`;
          statusText += `\n- URL: ${video.url}`;
          statusText += `\n- Duration: ${video.duration}`;
          statusText += `\n- Aspect Ratio: ${video.aspect_ratio}`;
        });
        statusText += '\n\nNote: Videos will be cleared after 30 days for security.';
      }
      return {
        content: [
          {
            type: 'text',
            text: statusText,
          },
        ],
      };
    }
  • The input schema definition for the 'check_video_status' tool, specifying the required 'task_id' parameter.
    {
      name: 'check_video_status',
      description: 'Check the status of a video generation task',
      inputSchema: {
        type: 'object',
        properties: {
          task_id: {
            type: 'string',
            description: 'The task ID returned from generate_video or generate_image_to_video',
          },
        },
        required: ['task_id'],
      },
    },
  • src/index.ts:65-465 (registration)
    The tool is registered by including it in the TOOLS array, which is returned by the server's ListTools handler (a sketch of that wiring appears after this reference list).
    const TOOLS: Tool[] = [
      { name: 'generate_video', description: 'Generate a video from text prompt using Kling AI', inputSchema: { type: 'object', properties: { prompt: { type: 'string', description: 'Text prompt describing the video to generate (max 2500 characters)', }, negative_prompt: { type: 'string', description: 'Text describing what to avoid in the video (optional, max 2500 characters)', }, model_name: { type: 'string', enum: ['kling-v1', 'kling-v1.5', 'kling-v1.6', 'kling-v2-master'], description: 'Model version to use (default: kling-v2-master)', }, aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1'], description: 'Video aspect ratio (default: 16:9)', }, duration: { type: 'string', enum: ['5', '10'], description: 'Video duration in seconds (default: 5)', }, mode: { type: 'string', enum: ['standard', 'professional'], description: 'Video generation mode (default: standard)', }, cfg_scale: { type: 'number', description: 'Creative freedom scale 0-1 (0=more creative, 1=more adherent to prompt, default: 0.5)', minimum: 0, maximum: 1, }, camera_control: { type: 'object', description: 'Camera movement settings for V2 models', properties: { type: { type: 'string', enum: ['simple', 'down_back', 'forward_up', 'right_turn_forward', 'left_turn_forward'], description: 'Camera movement type', }, config: { type: 'object', description: 'Camera movement configuration (only for "simple" type)', properties: { horizontal: { type: 'number', description: 'Horizontal movement [-10, 10]', minimum: -10, maximum: 10, }, vertical: { type: 'number', description: 'Vertical movement [-10, 10]', minimum: -10, maximum: 10, }, pan: { type: 'number', description: 'Pan rotation [-10, 10]', minimum: -10, maximum: 10, }, tilt: { type: 'number', description: 'Tilt rotation [-10, 10]', minimum: -10, maximum: 10, }, roll: { type: 'number', description: 'Roll rotation [-10, 10]', minimum: -10, maximum: 10, }, zoom: { type: 'number', description: 'Zoom [-10, 10]', minimum: -10, maximum: 10, }, }, }, }, }, }, required: ['prompt'], }, },
      { name: 'generate_image_to_video', description: 'Generate a video from an image using Kling AI', inputSchema: { type: 'object', properties: { image_url: { type: 'string', description: 'URL of the starting image', }, image_tail_url: { type: 'string', description: 'URL of the ending image (optional)', }, prompt: { type: 'string', description: 'Text prompt describing the motion and transformation', }, negative_prompt: { type: 'string', description: 'Text describing what to avoid in the video (optional)', }, model_name: { type: 'string', enum: ['kling-v1', 'kling-v1.5', 'kling-v1.6', 'kling-v2-master'], description: 'Model version to use (default: kling-v2-master)', }, duration: { type: 'string', enum: ['5', '10'], description: 'Video duration in seconds (default: 5)', }, mode: { type: 'string', enum: ['standard', 'professional'], description: 'Video generation mode (default: standard)', }, cfg_scale: { type: 'number', description: 'Creative freedom scale 0-1 (default: 0.5)', minimum: 0, maximum: 1, }, }, required: ['image_url', 'prompt'], }, },
      { name: 'check_video_status', description: 'Check the status of a video generation task', inputSchema: { type: 'object', properties: { task_id: { type: 'string', description: 'The task ID returned from generate_video or generate_image_to_video', }, }, required: ['task_id'], }, },
      { name: 'extend_video', description: 'Extend a video by 4-5 seconds using Kling AI. This feature allows you to continue a video beyond its original ending, generating new content that seamlessly follows from the last frame. Perfect for creating longer sequences or adding additional scenes to existing videos.', inputSchema: { type: 'object', properties: { task_id: { type: 'string', description: 'The task ID of the original video to extend (from a previous generation)', }, prompt: { type: 'string', description: 'Text prompt describing how to extend the video (what should happen next)', }, model_name: { type: 'string', enum: ['kling-v1', 'kling-v1.5', 'kling-v1.6', 'kling-v2-master'], description: 'Model version to use for extension (default: kling-v2-master)', }, duration: { type: 'string', enum: ['5'], description: 'Extension duration (fixed at 5 seconds)', }, mode: { type: 'string', enum: ['standard', 'professional'], description: 'Video generation mode (default: standard)', }, }, required: ['task_id', 'prompt'], }, },
      { name: 'create_lipsync', description: 'Create a lip-sync video by synchronizing mouth movements with audio. Supports both text-to-speech (TTS) with various voice options or custom audio upload. The original video must contain a clear, steady human face with visible mouth. Works with real, 3D, or 2D human characters (not animals). Video length limited to 10 seconds.', inputSchema: { type: 'object', properties: { video_url: { type: 'string', description: 'URL of the video to apply lip-sync to (must contain clear human face)', }, audio_url: { type: 'string', description: 'URL of custom audio file (mp3, wav, flac, ogg; max 20MB, 60s). If provided, TTS parameters are ignored', }, tts_text: { type: 'string', description: 'Text for text-to-speech synthesis (used only if audio_url is not provided)', }, tts_voice: { type: 'string', enum: ['male-warm', 'male-energetic', 'female-gentle', 'female-professional', 'male-deep', 'female-cheerful', 'male-calm', 'female-youthful'], description: 'Voice style for TTS (default: male-warm). Includes Chinese and English voice options', }, tts_speed: { type: 'number', description: 'Speech speed for TTS (0.5-2.0, default: 1.0)', minimum: 0.5, maximum: 2.0, }, model_name: { type: 'string', enum: ['kling-v1', 'kling-v1.5', 'kling-v1.6', 'kling-v2-master'], description: 'Model version to use (default: kling-v2-master)', }, }, required: ['video_url'], }, },
      { name: 'apply_video_effect', description: 'Apply pre-defined animation effects to static images using Kling AI. Create emotionally expressive videos from portraits with effects like hugging, kissing, or playful animations. Dual-character effects (hug, kiss, heart_gesture) require exactly 2 images. Single-image effects (squish, expansion, fuzzyfuzzy, bloombloom, dizzydizzy) require 1 image. Perfect for social media content and creative storytelling.', inputSchema: { type: 'object', properties: { image_urls: { type: 'array', items: { type: 'string', }, description: 'Array of image URLs. Use 2 images for hug/kiss/heart_gesture effects, 1 image for squish/expansion/fuzzyfuzzy/bloombloom/dizzydizzy effects', }, effect_scene: { type: 'string', enum: ['hug', 'kiss', 'heart_gesture', 'squish', 'expansion', 'fuzzyfuzzy', 'bloombloom', 'dizzydizzy'], description: 'The animation effect to apply. Dual-character: hug, kiss, heart_gesture. Single-image: squish, expansion, fuzzyfuzzy, bloombloom, dizzydizzy', }, duration: { type: 'string', enum: ['5', '10'], description: 'Video duration in seconds (default: 5)', }, model_name: { type: 'string', enum: ['kling-v1', 'kling-v1.5', 'kling-v1.6', 'kling-v2-master'], description: 'Model version to use (default: kling-v2-master)', }, }, required: ['image_urls', 'effect_scene'], }, },
      { name: 'generate_image', description: 'Generate images from text prompts using Kling AI. Create high-quality images with multiple aspect ratios and optional character reference support. Supports models v1, v1.5, and v2 with customizable parameters for creative control.', inputSchema: { type: 'object', properties: { prompt: { type: 'string', description: 'Text prompt describing the image to generate', }, negative_prompt: { type: 'string', description: 'Text describing what to avoid in the image (optional)', }, model_name: { type: 'string', enum: ['kling-v1', 'kling-v1.5', 'kling-v1.6', 'kling-v2-master'], description: 'Model version to use (default: kling-v2-master)', }, aspect_ratio: { type: 'string', enum: ['16:9', '9:16', '1:1', '4:3', '3:4', '2:3', '3:2'], description: 'Image aspect ratio (default: 1:1)', }, num_images: { type: 'number', description: 'Number of images to generate (default: 1)', minimum: 1, maximum: 4, }, ref_image_url: { type: 'string', description: 'Optional reference image URL for character consistency', }, ref_image_weight: { type: 'number', description: 'Weight of reference image influence (0-1, default: 0.5)', minimum: 0, maximum: 1, }, }, required: ['prompt'], }, },
      { name: 'check_image_status', description: 'Check the status of an image generation task', inputSchema: { type: 'object', properties: { task_id: { type: 'string', description: 'The task ID returned from generate_image', }, }, required: ['task_id'], }, },
      { name: 'virtual_try_on', description: 'Apply virtual clothing try-on to a person image using AI. Upload a person image and up to 5 clothing items to see how they would look wearing those clothes. Supports both single and multiple clothing combinations for complete outfit visualization.', inputSchema: { type: 'object', properties: { person_image_url: { type: 'string', description: 'URL of the person image to try clothes on', }, cloth_image_urls: { type: 'array', items: { type: 'string', }, description: 'Array of clothing image URLs (1-5 items). Multiple items will be combined into a complete outfit', minItems: 1, maxItems: 5, }, model_name: { type: 'string', enum: ['kolors-virtual-try-on-v1', 'kolors-virtual-try-on-v1.5'], description: 'Model version to use (default: kolors-virtual-try-on-v1.5)', }, }, required: ['person_image_url', 'cloth_image_urls'], }, },
      { name: 'get_resource_packages', description: 'Get detailed information about your Kling AI resource packages including remaining credits, expiration dates, and package types. Useful for monitoring API usage and planning resource allocation.', inputSchema: { type: 'object', properties: {}, required: [], }, },
      { name: 'get_account_balance', description: 'Check your Kling AI account balance and total available credits. Provides a comprehensive overview of your account status including total balance and breakdown by resource packages.', inputSchema: { type: 'object', properties: {}, required: [], }, },
      { name: 'list_tasks', description: 'List all your Kling AI generation tasks with filtering options. View task history, check statuses, and filter by date range or status. Supports pagination for browsing through large task lists.', inputSchema: { type: 'object', properties: { page: { type: 'number', description: 'Page number for pagination (default: 1)', minimum: 1, }, page_size: { type: 'number', description: 'Number of tasks per page (default: 10, max: 100)', minimum: 1, maximum: 100, }, status: { type: 'string', enum: ['submitted', 'processing', 'succeed', 'failed'], description: 'Filter tasks by status', }, start_time: { type: 'string', description: 'Filter tasks created after this time (ISO 8601 format)', }, end_time: { type: 'string', description: 'Filter tasks created before this time (ISO 8601 format)', }, }, required: [], }, },
    ];
  • The helper method in KlingClient that performs the actual API call to retrieve the video task status from Kling AI (a polling sketch built on this method follows this list).
    async getTaskStatus(taskId: string): Promise<TaskStatus> {
      const path = `/v1/videos/text2video/${taskId}`;
      try {
        const response = await this.axiosInstance.get(path);
        return response.data.data;
      } catch (error) {
        if (axios.isAxiosError(error)) {
          throw new Error(`Kling API error: ${error.response?.data?.message || error.message}`);
        }
        throw error;
      }
    }
  • TypeScript interface defining the structure of the task status response used by the tool.
    export interface TaskStatus {
      task_id: string;
      task_status: 'submitted' | 'processing' | 'succeed' | 'failed';
      task_status_msg?: string;
      created_at?: number;
      updated_at?: number;
      task_result?: {
        videos?: Array<{
          id: string;
          url: string;
          duration: string;
          aspect_ratio: string;
        }>;
      };
    }
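
Because Kling video generation is asynchronous, callers typically poll this status check until the task reaches a terminal state. The sketch below is a hypothetical polling helper built on the getTaskStatus method and TaskStatus interface shown above; the helper name, interval, and timeout are arbitrary choices, not part of this server.

    // Hypothetical helper: poll getTaskStatus until the task succeeds or fails,
    // or until a timeout elapses.
    async function waitForVideoTask(
      client: KlingClient,
      taskId: string,
      intervalMs = 10_000,
      timeoutMs = 10 * 60_000
    ): Promise<TaskStatus> {
      const deadline = Date.now() + timeoutMs;
      while (Date.now() < deadline) {
        const status = await client.getTaskStatus(taskId);
        if (status.task_status === 'succeed' || status.task_status === 'failed') {
          return status; // terminal state reached
        }
        // Still 'submitted' or 'processing': wait before checking again.
        await new Promise((resolve) => setTimeout(resolve, intervalMs));
      }
      throw new Error(`Timed out waiting for task ${taskId}`);
    }

    // Usage (illustrative):
    // const done = await waitForVideoTask(klingClient, taskId);
    // if (done.task_status === 'succeed') {
    //   console.log(done.task_result?.videos?.[0]?.url);
    // }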
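
As noted in the registration entry above, the TOOLS array is what the ListTools handler returns. The wiring likely follows the standard @modelcontextprotocol/sdk server pattern sketched below; the server name, version, and transport here are illustrative assumptions, not copied from src/index.ts.

    import { Server } from '@modelcontextprotocol/sdk/server/index.js';
    import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
    import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

    const server = new Server(
      { name: 'kling-mcp-server', version: '1.0.0' }, // illustrative metadata
      { capabilities: { tools: {} } }
    );

    // ListTools advertises the TOOLS array shown above.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: TOOLS }));

    // CallTool dispatches on the tool name; the 'check_video_status' case shown
    // earlier is one branch of this dispatch, reading its input from `args`.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
      switch (name) {
        // ... 'check_video_status' and the other tool handlers go here ...
        default:
          throw new Error(`Unknown tool: ${name}`);
      }
    });

    await server.connect(new StdioServerTransport());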

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/199-mcp/mcp-kling'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.