# analyze_video

Analyze video content by extracting key frames and answering questions, with timestamp-grounded responses from a vision-language model.
## Instructions
Analyze a video using the Qwen3-VL vision-language model. The video must be accessible via a public URL. The model will:

1. Download the video
2. Extract key frames (up to `max_frames`)
3. Analyze the frames against your question
4. Provide timestamp-grounded responses where applicable
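The frame-extraction step above can be sketched as follows. This is an illustrative assumption, not the tool's actual implementation: the `frame_timestamps` helper and its midpoint-sampling strategy are hypothetical, showing one common way to pick up to `max_frames` evenly spaced sample points from a video's duration.

```python
def frame_timestamps(duration_s: float, max_frames: int) -> list[float]:
    """Pick evenly spaced sample timestamps (in seconds) for key frames.

    Splits the video into n equal segments and samples the midpoint of
    each, clamping n to the 1-64 range the schema allows. Hypothetical
    sketch; the real tool may use scene detection or another strategy.
    """
    n = max(1, min(max_frames, 64))
    step = duration_s / n
    return [round((i + 0.5) * step, 2) for i in range(n)]


# For a 10-second clip with max_frames=4, this samples the middle of
# each 2.5-second segment: 1.25s, 3.75s, 6.25s, 8.75s.
print(frame_timestamps(10.0, 4))
```

Midpoint sampling avoids the degenerate first/last frames (often black or a title card) that naive endpoint sampling would pick up.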
**Examples:**

- "What happens in this video?"
- "Summarize the main events with timestamps"
- "What products are shown?"
- "At what timestamp does the speaker mention X?"
- "What is being discussed or demonstrated?"
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| `video_url` | Yes | URL of the video to analyze (must be publicly accessible) | |
| `question` | No | Question or prompt about the video | `Describe what happens in this video in detail.` |
| `max_frames` | No | Maximum number of frames to extract (1-64) | |
| `max_tokens` | No | Maximum number of tokens in the response | |