Glama
135,788 tools. Last updated 2026-05-17 06:53

"Prime Video" matching MCP tools:

  • Cut and assemble a clip from any prior video job (find_clips, summarize, or video transcribe). Operates on a parent job — possessing the parent `source_job_id` is the only capability required; there is no upload step. Pass one segment for a simple cut, or multiple non-contiguous segments to compose a single mp4 highlight reel — same flat $0.50 either way. Two-call flow: (1) call with `source_job_id` + `segments` (ordered array of `{start, end, label?}` in source seconds, total duration capped at 30 minutes) to receive {job_id, payment_challenge}; (2) pay via MPP and call with `job_id` + `payment_credential` to start processing. Poll get_job_status(job_id) for completion; outputs are role `clip-video` (the assembled .mp4, frame-accurate boundaries with 15ms audio fades at segment joins) and — when `include_transcript: true` (default) — roles `clip-srt` + `clip-words` (transcripts stitched and time-shifted to match the assembled video). Set `include_transcript: false` to skip transcript outputs. Payment: MPP — accepts Tempo USDC and Stripe SPT. The challenge's WWW-Authenticate header and /.well-known/mpp.json are authoritative for which methods are offered. Source must still be in storage (72h TTL for find_clips parents, 24h elsewhere — check `expires_at` from get_job_status on the parent). Multiple extract_clip calls against one parent are independent paid jobs. Failed jobs auto-refund.
    Connector
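The first call above takes an ordered `segments` array whose total duration is capped at 30 minutes. A minimal client-side sketch of validating and assembling the call-(1) parameters before requesting the payment challenge (function names are hypothetical, not part of the tool's API; only the field names and the 30-minute cap come from the description):

```python
def total_segment_seconds(segments):
    """Sum the durations of an ordered segment list ({start, end, label?} in source seconds)."""
    return sum(seg["end"] - seg["start"] for seg in segments)

def build_extract_clip_request(source_job_id, segments, include_transcript=True):
    """Assemble the parameters for call (1); the response carries {job_id, payment_challenge}."""
    if not segments:
        raise ValueError("at least one segment is required")
    if any(seg["end"] <= seg["start"] for seg in segments):
        raise ValueError("each segment must have end > start")
    if total_segment_seconds(segments) > 30 * 60:  # 30-minute cap from the tool description
        raise ValueError("total clip duration exceeds the 30-minute cap")
    return {
        "source_job_id": source_job_id,
        "segments": segments,
        "include_transcript": include_transcript,
    }
```

Passing multiple non-contiguous segments here composes a single highlight reel; each separate call against the same parent is an independent $0.50 job.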
  • Cut a 9:16 vertical clip from any prior video job (find_clips, summarize, or video transcribe), suitable for direct upload to TikTok, Instagram Reels, or YouTube Shorts. Default output is 1080×1920 H.264 / AAC `.mp4` with center-cropped framing; audio loudness-normalized to -14 LUFS / -1.5 dBTP for short-form social. Single-segment only; clip duration must be between 1 and 90 seconds (Instagram Reels max). Operates on a parent job — possessing the parent `source_job_id` is the only capability required; there is no upload step. Two-call flow: (1) call with `source_job_id` + `start` + `end` (in source seconds) to receive {job_id, payment_challenge}; (2) pay via MPP and call with `job_id` + `payment_credential` to start processing. Poll get_job_status(job_id) for completion; output is role `clip-vertical-video` (the `.mp4`). Flat price: $0.50 per clip. Payment: MPP — accepts Tempo USDC and Stripe SPT. Optional `profile` parameter selects the encoding profile (default `tiktok-primary`). Allowed values: `tiktok-primary` (1080×1920, fast preset, CRF 22), `tiktok-primary-720p` (720×1280, CBR 3 Mbps — half-resolution mobile-optimized, ~40% faster wall time), `instagram-reels` (1080×1920, slow preset, CBR 4 Mbps), `instagram-stories` (same encode shape as instagram-reels). All four profiles loudness-normalize identically. Source must be a horizontal video (wider than 9:16) — already-vertical or square sources are rejected. Source must still be in storage (72h TTL for find_clips parents, 24h elsewhere — check `expires_at` from get_job_status on the parent). Pair with `find_clips` ($2.00/video) to pick a moment first, then call this to get a download-ready vertical mp4 in under 5 minutes. Multiple extract_vertical_clip calls against one parent are independent paid jobs. Failed jobs auto-refund.
    Connector
  • Search podcasts (shows) or episodes from the open Podcast Index. Use when the user mentions a podcast, podcast host, audio show, or asks about a topic where podcast content adds value alongside video. type=podcast returns shows; type=episode returns recent episodes for the top-matching show and includes the RSS-declared transcript URL when the feed exposes one. Costs 1 credit.
    Connector
  • Generate a short video (5-10s) from a text prompt using BytePlus Seedance. Optionally accepts up to 12 image file IDs from the user's attached files (visible in the [ATTACHMENTS] block) as `reference_file_ids` for style and composition. Returns immediately with a job_id; the video is delivered back via continuation when the job completes (~30-90s for fast model, ~2-5min for pro). Reference images are temporarily re-hosted on a third-party CDN (imgbb) for the duration of generation and deleted on completion — don't submit confidential references. Gated behind a workspace opt-in flag.
    Connector
  • [SPEND: 5 USDC] Generate a short-form video from a prompt or URL. Costs 5 USDC (Base/Ethereum/Polygon/Solana via x402). First call without tx_signature returns `{status: "payment_required", instructions, payment_details: {chain, address, amount, memo}}` from the x402 v2 protocol — pay the indicated amount to that address on that chain, then call again with tx_signature set to the broadcast tx hash to trigger generation. Returns a session_id to poll with check_video_status. Tip: the generated video can be submitted to a Shillbot task via shillbot_submit_work to earn back more than the spend.
    Connector
  • Transcribe audio or video to text, including per-word timestamps for precise editing. Three-call flow: (1) call with `filename` to receive {job_id, payment_challenge}; (2) pay via MPP, then call with `job_id` + `payment_credential` to receive {upload_url} (presigned PUT, 1h expiry); (3) PUT the bytes, then complete_upload(job_id), then poll get_job_status(job_id). On completion, get_job_status returns presigned download URLs for two files: role `transcript` (SRT) and role `transcript-words` (JSON matching /.well-known/weftly-transcript-v2.schema.json, with segment-level and per-word timestamps). For other formats, pass `format=srt|txt|vtt|json|words` to get_job_status to receive content inline — `txt` and `vtt` are derived from SRT, `json` is v1 (segments only), `words` is v2 (segments + words). Flat price: audio $0.50, video $1.00 — see /.well-known/mpp.json for the authoritative table. Use for podcasts, interviews, meetings, lectures, and especially for creating clips, multicamera edits, or edit-video-from-transcript where word boundaries matter. Retrying any call with `job_id` alone returns current state (idempotent). Failed jobs auto-refund.
    Connector
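The three-call flow above sequences strictly: announce the file, pay and exchange the credential for an upload URL, then upload, finalize, and poll. A minimal sketch of that orchestration with the network operations passed in as callables so the ordering is explicit (all names hypothetical; only the request/response fields come from the description):

```python
import time

def run_transcribe_flow(start_job, pay_mpp, put_bytes, complete_upload, get_job_status,
                        filename, media_bytes, poll_interval=0.0):
    """Drive the three-call transcribe flow; returns the terminal job-status dict."""
    # (1) announce the file: returns {job_id, payment_challenge}
    first = start_job(filename=filename)
    job_id = first["job_id"]
    # (2) pay via MPP, then exchange the credential for a presigned upload URL (1h expiry)
    credential = pay_mpp(first["payment_challenge"])
    upload = start_job(job_id=job_id, payment_credential=credential)
    # (3) PUT the bytes, finalize, then poll until the job reaches a terminal state
    put_bytes(upload["upload_url"], media_bytes)
    complete_upload(job_id)
    while True:
        status = get_job_status(job_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
```

Because retrying any call with `job_id` alone returns current state, a crashed client can resume from step (2) or (3) without double-paying.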

Matching MCP Servers

Matching MCP Connectors

  • Create and manage cinematic AI video renders through the Future Video Studio Agent API.

  • Search your Flashback video library with natural language to instantly find relevant moments. Get…

  • Buy a Studio subscription for $25 USDC (30 days). Requires authentication. This endpoint returns HTTP 402 with x402 payment instructions. Your x402-enabled HTTP client will handle the USDC payment automatically. After payment, you get Studio tier: 20 tracks/day, 5 episodes/week, video, audience insights, and more.
    Connector
  • START HERE for any clip workflow on a video — `find_clips` is the canonical entry point and includes a full transcription as a free byproduct. **Do not call `transcribe` first**: doing so doubles the upload, doubles the spend, and produces the same transcript. Identify ranked candidate clips in a video — what to cut for highlights, social, or testimonials. Three-call flow: (1) call with `filename` (and optional `query`) to receive {job_id, payment_challenge}; (2) pay via MPP, then call with `job_id` + `payment_credential` to receive {upload_url} (presigned PUT, 1h expiry); (3) PUT the bytes, then complete_upload(job_id), then poll get_job_status(job_id). On completion, get_job_status returns presigned download URLs for three files: role `clip-candidates` (JSON matching /.well-known/weftly-clips-v1.schema.json — includes `source_job_id` and `source_expires_at`), role `transcript` (SRT, free byproduct), role `transcript-words` (JSON matching /.well-known/weftly-transcript-v2.schema.json, free byproduct). Each candidate carries `transcript_text` — the full text of what's in the clip — so callers can preview content before paying for extract_clip. Optional `query` parameter switches to query mode (e.g., "they discuss pricing", "the part about hiring") with the same output shape; the `mode` field in clip-candidates.json indicates which mode produced the result. Flat price: $2.00 video — see /.well-known/mpp.json. **Source-reuse contract:** the source video stays in storage for 72h after find_clips completes. Hand the find_clips `job_id` (also returned as `source_job_id` in the candidates JSON) to `extract_clip` or `extract_vertical_clip` as their `source_job_id` — within those 72h they cut directly from the stored source: no re-upload, no re-transcribe, just $0.50 per cut. Pass the same `source_job_id` to as many extract calls as you need. Use for interviews, podcasts, sales calls, all-hands recordings. Retrying with `job_id` alone returns current state. Failed jobs auto-refund.
    Connector
  • Permanently delete a YouTube video by id (or 'youtube:video:<id>'). Cannot be undone. Costs 50 quota units. Caller must own the channel.
    Connector
  • Ask a question about one or more videos with visual analysis. Most effective on focused time ranges — use start/end to specify the segment to analyze. BEFORE calling this tool, read the reka://docs/guide resource for recommended workflows. In most cases, you should first:
    - search_videos to find WHEN something happens, then pass those timestamps here as start/end
    - segment_video to detect and locate specific objects
    - get_transcript to read what was said
    For single-video questions, pass video_id with start/end. For cross-video questions, pass videos — a list of video references with start/end each. For follow-up questions, pass conversation_id from the previous response. You can add start/end to drill into a specific moment while keeping the conversation context. Requires qa_only or full pipeline.
    Connector
  • [EARN: SOL] Submit completed work for a claimed Shillbot task. Provide the content_id (YouTube video ID, tweet ID, game session ID, etc.). Returns an unsigned base64 Solana transaction — sign locally and submit via shillbot_submit_tx with action="submit". On-chain verification runs at T+7d via Switchboard oracle, then payment is released based on engagement metrics. Optional `network`: 'mainnet' (default) or 'devnet'.
    Connector
  • Start an AI transcription (Whisper) of a YouTube video. Use when the video has no captions, when fetch_transcript returned NO_CAPTIONS, or when the user explicitly wants an AI transcript. ASYNC — returns task_id + estimated_wait_seconds. Tell the user how long it will take, then call get_asr_task to check status. Do not poll faster than next_poll_after_seconds. Costs 5 credits on completion.
    Connector
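The polling etiquette above (never poll faster than `next_poll_after_seconds`) generalizes to any of the async tools in this listing. A minimal sketch of a compliant poll loop, with the clock and sleep injected so it can be tested without waiting (function and parameter names are hypothetical; only `next_poll_after_seconds` and the task-status idea come from the description):

```python
import time

def poll_asr_task(get_asr_task, task_id, timeout=600.0,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll get_asr_task until terminal, honoring the server's next_poll_after_seconds hint."""
    deadline = clock() + timeout
    while clock() < deadline:
        result = get_asr_task(task_id)
        if result["status"] in ("completed", "failed"):
            return result
        # Never poll faster than the server asks; fall back to 5s if no hint is given.
        sleep(result.get("next_poll_after_seconds", 5))
    raise TimeoutError("ASR task did not finish within the timeout")
```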
  • Summarize an audio or video file — returns both a text summary AND the full transcript (with per-word timestamps). Do not also call transcribe on the same file. Three-call flow: (1) call with `filename` to receive {job_id, payment_challenge}; (2) pay via MPP, then call with `job_id` + `payment_credential` to receive {upload_url} (presigned PUT, 1h expiry); (3) PUT the bytes, then complete_upload(job_id), then poll get_job_status(job_id). On completion, get_job_status returns presigned download URLs for three files: role `summary` (plain text), role `transcript` (SRT), and role `transcript-words` (JSON matching /.well-known/weftly-transcript-v2.schema.json, with segment-level and per-word timestamps). For other formats, pass `format=srt|txt|vtt|json|words` to get_job_status to receive transcript content inline — `txt` and `vtt` are derived from SRT, `json` is v1 (segments only), `words` is v2 (segments + words). Flat price: audio $0.75, video $1.25 — see /.well-known/mpp.json for the authoritative table. Use for meetings, long-form interviews, lectures, and podcast episodes; the `words` output additionally supports creating clips, multicamera edits, or edit-video-from-transcript. Retrying any call with `job_id` alone returns current state (idempotent). Failed jobs auto-refund.
    Connector
  • Generate a short video (5-10s) from a text prompt using BytePlus Seedance. Optionally accepts up to 12 image file IDs from the user's attached files (visible in the [ATTACHMENTS] block) as `reference_file_ids` for style and composition. Returns immediately with a job_id; the video is delivered back via continuation when the job completes (~30-90s for fast model, ~2-5min for pro). Reference images are temporarily re-hosted on a third-party CDN (imgbb) for the duration of generation and deleted on completion — don't submit confidential references. Gated behind a workspace opt-in flag.
    Connector
  • Generate direct-response video ad scripts from a proven structure plus a brand's PowerSource. Output is direct-response video ad copy for paid social (Meta, TikTok, Reels) in the brand's voice, with a hook, beat-by-beat body, and CTA close. Pass source_id (from adformula_intelligence, decoder_intelligence, or decode_ad) plus source_type and a powersource_id (job_id or brief_id from create_powersource_*). script_mode: "blueprint" preserves the source structure exactly; "remix" keeps the psychological architecture but writes original copy. Generate 1-5 variants per call (tensions and selling points auto-rotated across variants). Metered pricing — typically 2-5 credits per script depending on length (~2 credits for a 15s script, ~5 credits for a 60s script). Pre-flight reserves a 17-credit ceiling and refunds the difference once actual usage is measured.
    Connector
  • Queue a new video render for an existing quiz. Returns the render sessionId; poll quiz_video_get_render until its status is "completed" (typically 1-5 minutes), then call quiz_video_download_render to obtain the signed MP4 URL. The quiz itself is viewable immediately at /quiz/{slug}/ regardless of render status.
    Connector
  • Fetch metadata about a YouTube video WITHOUT downloading it. Returns title, channel name, duration, view count, upload date, thumbnail URL, full video description, available video qualities, and the YouTube license type (Standard YouTube License vs Creative Commons). Use this tool when the user says things like:
    - "what is this YouTube video about" / "summarize this video"
    - "how long is this video" / "when was this uploaded"
    - "who made this video" / "what channel is this from"
    - "is this Creative Commons" / "can I reuse this" / "what is the license"
    - "what qualities are available for this video"
    Do NOT use this tool when the user wants to download, save, rip, extract, or convert the video — use download_video for that. Free to call — does not count against the user's download quota. Call this before download_video when you need to confirm the video exists, pick the right quality, or check licensing before downloading.
    Connector
  • Submit an asynchronous video generation task. Starts a Seedance generation job — text-to-video, image-to-video, or video-to-video depending on `content` — and returns a `taskId` immediately; the video is produced in the background. Poll `GET /openapi/v2/model/video/tasks/{task_id}` with the returned id until `status` is terminal to obtain the video URL. Available to CONTRACT-tier API keys only. The task cost is charged in USD from the account wallet on the first successful poll, not at submit time. At submit time the wallet balance is checked against an upper-bound cost estimate; an insufficient balance returns 402 with `X-Usd-Required-*` headers and no task is created.
    Request body:
    - `model` (string, required): `seedance-2.0` or `seedance-2.0-fast`.
    - `content` (array, required, >= 1 item): generation inputs as an OpenAI-style content array. Must include at least one text item `{"type": "text", "text": "<prompt>"}`. May also include reference media items such as `{"type": "image_url", "image_url": {"url": "https://..."}, "role": "reference_image"}` (and likewise `video_url` / `audio_url`), capped at 9 image, 3 video, and 3 audio items. `role` is one of `first_frame`, `last_frame`, `reference_image`, `reference_video`, `reference_audio`. Reference URLs must be publicly reachable.
    - `resolution` (string, optional, default `720p`): `480p`, `720p`, or `1080p`. `1080p` is not supported by `seedance-2.0-fast`. Also the billing tier.
    - `duration` (integer, required): output length in seconds, 4-15.
    - `ratio` (string, optional): output aspect ratio — `21:9`, `16:9`, `4:3`, `1:1`, `3:4`, `9:16`, or `adaptive`.
    - `generate_audio` (boolean, optional): generate an audio track.
    - `watermark` (boolean, optional): overlay the provider watermark.
    - `service_tier` (string, optional): `flex` for cheaper offline inference.
    - `return_last_frame` (boolean, optional): also return the video's last frame on `output`.
    Any further unrecognized top-level fields are forwarded to the generation provider unchanged.
    Response `data`:
    - `taskId` (string): identifier to poll, format `task_video_<id>`.
    - `status` (string): always `pending` immediately after submit.
    Responses:
    - 200 Successful Response (application/json). Example:
      ```json
      {
        "success": true,
        "meta": {
          "requestId": "Requestid",
          "timestamp": "Timestamp"
        }
      }
      ```
      The envelope wraps `success` (boolean, default true), `data` (the payload above), `error` (populated on failure), and `meta` (required; carries `requestId` and `timestamp`, plus optional pagination fields `total`, `page`, `pageSize`, `totalPages`). Credit fields in `meta` follow the ADR-0003 parallel-fields strategy: `creditsRemaining` / `creditsConsumed` are legacy fields rounded to whole credits, kept for zero-breaking-change to existing SDK clients, while `creditsRemainingExact` / `creditsConsumedExact` are precision-aware values to 1 decimal place for clients that opt in to decimal credits. Per the ADR-0003 §8 timeline, the legacy fields are slated for deprecation (~2026-11) and removal via a major-version bump of the OpenAPI surface (~2027-05). `meta.tokensUsage` is a provider token-usage block populated on terminal video polls only, null on every non-video endpoint.
    - 422 Validation Error (application/json). Example:
      ```json
      {
        "detail": [
          {"loc": [], "msg": "Message", "type": "Error Type", "ctx": {}}
        ]
      }
      ```
      Each entry in `detail` carries `loc`, `msg`, and `type`, plus optional `input` and `ctx`.
    Connector
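The request-body constraints above (at least one text item, per-type reference caps of 9/3/3, duration 4-15s, `1080p` unsupported on `seedance-2.0-fast`) can all be checked client-side before spending a submit. A sketch of a request builder under those stated constraints (builder name and the `references` tuple shape are hypothetical; field names match the documented body):

```python
def build_seedance_request(model, prompt, duration, resolution="720p", ratio=None, references=()):
    """Assemble and validate a video-task request body before POSTing it."""
    if model not in ("seedance-2.0", "seedance-2.0-fast"):
        raise ValueError("unknown model")
    if model == "seedance-2.0-fast" and resolution == "1080p":
        raise ValueError("1080p is not supported by seedance-2.0-fast")
    if not 4 <= duration <= 15:
        raise ValueError("duration must be 4-15 seconds")
    # content is an OpenAI-style array; at least one text item is required
    content = [{"type": "text", "text": prompt}]
    caps = {"image_url": 9, "video_url": 3, "audio_url": 3}
    counts = {k: 0 for k in caps}
    for url_type, url, role in references:  # e.g. ("image_url", "https://...", "reference_image")
        counts[url_type] += 1
        if counts[url_type] > caps[url_type]:
            raise ValueError(f"too many {url_type} items (cap {caps[url_type]})")
        content.append({"type": url_type, url_type: {"url": url}, "role": role})
    body = {"model": model, "content": content, "duration": duration, "resolution": resolution}
    if ratio is not None:
        body["ratio"] = ratio
    return body
```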
  • Fetch consolidated YouTube video metadata with numeric types — duration_seconds (int), view_count (int64), published_at (RFC3339). Use when you need exact numbers for sorting/analytics instead of YouTube's display strings ('1.2M views', '2 weeks ago'). Title, description, channel_id, channel_title, thumbnail, and is_live are included too. Costs 1 credit.
    Connector