searchTranscripts

Query video transcript collections to retrieve timestamped segments ranked by relevance. Locate specific moments in imported content using semantic search across active or specified libraries.

Instructions

Search imported transcript-text collections with active-collection focus by default and return ranked timestamped chunks. [~instant]

Input Schema

| Name                | Required | Description | Default |
|---------------------|----------|-------------|---------|
| query               | Yes      |             |         |
| collectionId        | No       |             |         |
| maxResults          | No       |             |         |
| minScore            | No       |             |         |
| videoIdFilter       | No       |             |         |
| useActiveCollection | No       |             |         |
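For concreteness, here is a minimal sketch of calling this tool from an MCP client using the official TypeScript SDK. Since the schema documents none of the parameters, every argument value below is an illustrative guess, and the server launch command is assumed.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to the vidlens-mcp server over stdio (launch command assumed).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "vidlens-mcp"],
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Invoke searchTranscripts. All argument values are guesses at plausible
// usage, since the schema ships no descriptions or defaults.
const result = await client.callTool({
  name: "searchTranscripts",
  arguments: {
    query: "moment where the speaker explains attention heads",
    maxResults: 5,             // presumably caps the ranked chunks returned
    minScore: 0.5,             // presumably a relevance-score floor
    useActiveCollection: true, // matches the documented default behavior
  },
});
console.log(result.content);
```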
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden. It adds some behavioral context, including the '[~instant]' performance marker and the default active-collection behavior, but it fails to confirm read-only safety, error conditions, or ranking methodology.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
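For illustration, a sketch of how the server could supply the missing behavioral signals via MCP tool annotations. The registerTool shape follows recent versions of the TypeScript SDK; the hint values are assumptions about what a search tool like this presumably does.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "vidlens-mcp", version: "1.0.0" });

server.registerTool(
  "searchTranscripts",
  {
    description:
      "Search imported transcript-text collections with active-collection " +
      "focus by default and return ranked timestamped chunks. [~instant]",
    inputSchema: { query: z.string() }, // remaining params omitted for brevity
    annotations: {
      readOnlyHint: true,   // assumption: search does not modify state
      idempotentHint: true, // assumption: repeated calls are safe
      openWorldHint: false, // assumption: operates only on imported data
    },
  },
  async ({ query }) => {
    // ... perform the semantic search and return ranked chunks ...
    return { content: [{ type: "text", text: `results for: ${query}` }] };
  }
);
```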

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single efficient sentence that is front-loaded with the core action. The '[~instant]' performance marker is somewhat cryptic, but the structure avoids unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given six undocumented parameters and no output schema, the description is inadequate. It leaves most parameters unexplained and gives only minimal detail about the return structure ('ranked timestamped chunks'), clarifying neither the search syntax nor the result format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
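Because the server declares no output schema, an agent cannot know what a 'ranked timestamped chunk' contains. A purely hypothetical shape the server could publish, with every field name a guess:

```typescript
// Hypothetical result shape -- the server declares no output schema,
// so the field names and types here are illustrative guesses only.
interface TranscriptChunk {
  videoId: string;   // source video within the collection
  startTime: number; // segment start, in seconds
  endTime: number;   // segment end, in seconds
  text: string;      // transcript text of the matched segment
  score: number;     // relevance score used for ranking
}

type SearchTranscriptsResult = TranscriptChunk[];
```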

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage across its six parameters, yet the description only implicitly addresses the collection-scoping logic (collectionId/useActiveCollection). It does not explain the query syntax, the purpose of minScore and videoIdFilter, or any constraints on maxResults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
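A sketch of what per-parameter documentation could look like using Zod's .describe(), which the TypeScript SDK accepts for input schemas. Every constraint, default, and semantic shown is an assumption, since the actual server documents none of them.

```typescript
import { z } from "zod";

// Assumed semantics throughout -- the actual server documents none of these.
const inputSchema = {
  query: z.string()
    .describe("Natural-language search text; matched semantically, not literally"),
  collectionId: z.string().optional()
    .describe("Search a specific collection instead of the active one"),
  maxResults: z.number().int().min(1).max(50).optional()
    .describe("Maximum number of ranked chunks to return (assumed default: 10)"),
  minScore: z.number().min(0).max(1).optional()
    .describe("Drop chunks scoring below this relevance threshold"),
  videoIdFilter: z.string().optional()
    .describe("Restrict results to a single video within the collection"),
  useActiveCollection: z.boolean().optional()
    .describe("When true (assumed default), search the active collection"),
};
```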

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Search') and resource ('imported transcript-text collections'), and clarifies the return format ('ranked timestamped chunks'). It distinguishes scope with 'active-collection focus by default,' though it does not explicitly differentiate itself from the sibling `readTranscript` tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'active-collection focus by default,' implying a default scoping behavior, but gives no explicit guidance on when to use this tool versus `readTranscript` or `searchComments`. It also omits prerequisites, such as the need for previously imported collections.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
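As a sketch, a revised description carrying explicit usage guidance might read as follows; the routing advice for readTranscript and searchComments is an assumption about how the sibling tools divide responsibility.

```typescript
// Hypothetical rewrite of the tool description -- the guidance about
// readTranscript and searchComments assumes those tools' typical roles.
const description =
  "Semantically search imported transcript collections and return ranked, " +
  "timestamped chunks. Searches the active collection unless collectionId " +
  "is given. Requires at least one imported collection. Use this to locate " +
  "moments by topic; use readTranscript to fetch a known video's full " +
  "transcript, and searchComments for viewer comments. Read-only. [~instant]";
```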

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/thatsrajan/vidlens-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.