# indexVisualContent

Build a searchable visual index for videos by extracting frames and analyzing content with OCR and computer vision to extract text and enable semantic frame retrieval.

## Instructions

Build a real visual index for a video using extracted frames, Apple Vision OCR, Apple Vision feature prints, and optional Gemini frame descriptions. Returns frame evidence with local image paths. [~30-120s, downloads + OCR + vision]

## Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| videoIdOrUrl | Yes | Video ID or URL to index visually | |
| intervalSec | No | Frame sampling interval in seconds | 20 |
| maxFrames | No | Maximum frames to analyze | 12 |
| imageFormat | No | | |
| width | No | | |
| autoDownload | No | Automatically download a small local video copy if none exists | true |
| downloadFormat | No | Video format used if auto-download is needed | worst_video |
| forceReindex | No | Re-run OCR/description analysis even if frames are already indexed | |
| includeGeminiDescriptions | No | Use Gemini to describe each frame when a Gemini key is configured | |
| includeGeminiEmbeddings | No | Generate Gemini embeddings over OCR/description text for semantic retrieval | true when a Gemini key is available |
| dryRun | No | | |
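To make the schema concrete, here is a minimal sketch of the arguments an agent might pass. The video identifier is a placeholder, and the values simply restate the schema defaults; this is an illustration, not a documented invocation.

```python
import json

# Illustrative indexVisualContent arguments built only from the schema above.
# "VIDEO_ID_OR_URL" is a placeholder, not a real identifier.
args = {
    "videoIdOrUrl": "VIDEO_ID_OR_URL",
    "intervalSec": 20,      # schema default: sample one frame every 20 s
    "maxFrames": 12,        # schema default: cap analysis at 12 frames
    "autoDownload": True,   # schema default: fetch a small local copy if needed
    "forceReindex": False,  # keep cached OCR/vision results when they exist
}
payload = json.dumps(args)  # what an MCP client would send as the tool input
```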
## Behavior (4/5)

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries significant weight. It successfully discloses duration (~30-120s), computational cost (downloads + OCR + vision), and return format (frame evidence with local image paths). However, it omits persistence details—whether the index is stored permanently, if it can be queried later, or implications of `forceReindex`.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

## Conciseness (5/5)

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient: two sentences deliver purpose, return value, and timing. The bracketed duration note is high-signal. No redundancy or tautology; every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

## Completeness (3/5)

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For an 11-parameter mutation tool with no output schema, the description covers the basics but misses the critical relationship to `searchVisualContent`, which likely depends on this index. It should explicitly state that it creates a persistent searchable index, or clarify one-time versus cached behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

## Parameters (3/5)

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 73%, establishing a baseline of 3. The description mentions 'optional Gemini frame descriptions' and 'downloads,' which loosely map to parameters, but adds minimal semantic detail beyond the schema's own descriptions. It notably fails to explain the `dryRun` parameter (undocumented in schema).
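The 73% figure is consistent with the schema shown above, assuming 8 of the 11 parameters carry a description (imageFormat, width, and dryRun do not):

```python
# Reproducing the review's schema-coverage figure: of the 11 parameters,
# presumably 8 have descriptions (imageFormat, width, and dryRun do not).
documented = 11 - 3
coverage = round(100 * documented / 11)
print(coverage)  # 73
```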

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

## Purpose (5/5)

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific technical verbs ('Build') and resources ('visual index') that clearly distinguish this from siblings like `extractKeyframes` (extraction-only) and `searchVisualContent` (query-only). It specifies the exact technologies employed (Apple Vision OCR, feature prints, Gemini), leaving no ambiguity about the tool's scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

## Usage Guidelines (3/5)

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the timing bracket [~30-120s] hints at cost, the description lacks explicit guidance on when to use this versus `searchVisualContent` (which presumably requires this index) or `extractKeyframes`. It doesn't state prerequisites (e.g., 'use this before searching') or when to avoid re-indexing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
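The sequencing guidance the review asks for could be sketched as a thin wrapper. Everything here is an assumption for illustration: `call_tool` stands in for whatever MCP client the agent uses, and the argument shapes and the dependency between the two tools are inferred, not documented by the server.

```python
# Hypothetical index-then-search flow; "call_tool" and the tool contract
# below are assumptions, not the server's documented API.
def ensure_visual_index(call_tool, video, force=False):
    """Run the expensive indexing pass; rely on caching unless forced."""
    return call_tool("indexVisualContent", {
        "videoIdOrUrl": video,
        "forceReindex": force,  # avoid re-running OCR/vision needlessly
    })

def find_frames(call_tool, video, query):
    """searchVisualContent presumably needs the index, so build it first."""
    ensure_visual_index(call_tool, video)
    return call_tool("searchVisualContent", {
        "videoIdOrUrl": video,
        "query": query,
    })
```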


## MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/thatsrajan/vidlens-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.