
Server Details

An MCP server that fetches video transcripts/subtitles, with pagination for large responses. Supports YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, and Reddit. Includes a Whisper fallback that transcribes audio when subtitles are unavailable.

Status: Healthy
Transport: Streamable HTTP
Tool Descriptions: A

Average 3.8/5 across 7 of 7 tools scored.

Server Coherence: A

Disambiguation: 4/5

Tools are mostly distinct, but the quartet of transcript/subtitle fetching tools (get_transcript, get_raw_subtitles, get_playlist_transcripts, get_available_subtitles) requires careful reading to distinguish between raw vs cleaned formats and single vs playlist scope. Boundaries are clear from descriptions but naming similarity could cause hesitation.

Naming Consistency: 4/5

Follows consistent verb_noun snake_case pattern throughout (get_, search_). Minor terminology inconsistency: 'subtitles' vs 'transcripts' used interchangeably for similar resources (though descriptions clarify raw vs cleaned intent). All seven tools follow predictable structure.

Tool Count: 5/5

Seven tools is ideal for this focused domain. Covers the complete workflow: search (search_videos), discovery (get_video_info, get_available_subtitles, get_video_chapters), and extraction (get_transcript, get_raw_subtitles, get_playlist_transcripts). No bloat, no obvious missing operations for a read-only transcription server.

Completeness: 4/5

Excellent coverage for read-only transcription extraction across multiple platforms. Supports single videos and playlists, raw and cleaned formats, with pagination and language selection. Minor gap: no explicit 'list supported platforms' tool, though get_transcript description mentions them. No CRUD operations for creating/updating subtitles, but that appears intentional for a Transcriptor service.

Available Tools

7 tools
get_available_subtitles: Get available subtitle languages (Grade: A)
Read-only, Idempotent

List available official and auto-generated subtitle languages.

Parameters (JSON Schema)
url (required): Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID
format (optional): Subtitle format (default from YT_DLP_SUB_FORMAT or srt)

Output Schema

Fields (all required): auto, videoId, official
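Since the server is reached over Streamable HTTP, clients invoke its tools with standard MCP `tools/call` JSON-RPC messages. A minimal sketch of the request body for this tool; the envelope follows the MCP specification, and the video URL is a placeholder, not a real video:

```python
import json

# Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke the tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_available_subtitles",
        "arguments": {
            "url": "https://www.youtube.com/watch?v=EXAMPLE_ID",  # placeholder URL
            "format": "srt",
        },
    },
}

payload = json.dumps(request)
```

In practice the MCP client library (or the Glama gateway) builds this envelope for you; only the `name` and `arguments` fields are tool-specific.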
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable behavioral context beyond annotations by specifying 'official and auto-generated' subtitle types, helping the agent understand the scope of discovery. Annotations cover safety profile (readOnly/idempotent), so description appropriately focuses on content scope rather than repeating safety properties.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action verb 'List', zero waste. 'official and auto-generated' earns its place by clarifying scope without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 2-parameter discovery tool with 100% schema coverage, existing output schema, and annotations present. Could marginally improve by noting this precedes download operations, but sufficient as-is.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with comprehensive descriptions for both url (platform list) and format (default behavior). Description doesn't add parameter semantics, but with complete schema documentation, baseline 3 is appropriate and sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'List' with resource 'subtitle languages'. The phrase 'available...languages' effectively distinguishes this discovery tool from siblings like get_transcript or get_raw_subtitles (which retrieve actual content) and get_video_info (which covers general metadata).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through 'available' (suggesting it's for checking what exists before downloading), but lacks explicit guidance like 'Use before get_transcript to check supported languages' or warnings about when not to use alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_playlist_transcripts: Get playlist transcripts (Grade: A)
Read-only, Idempotent

Fetch cleaned subtitles (plain text) for multiple videos from a playlist. Use playlistItems (e.g. "1:5") to select specific items, maxItems to limit count.

Parameters (JSON Schema)
url (required): Playlist URL (e.g. youtube.com/playlist?list=XXX) or watch URL with list= parameter
lang (optional): Language code (e.g. en, ru). Default: en
type (optional): Subtitle track type: official or auto-generated (default: auto)
format (optional): Subtitle format (default from YT_DLP_SUB_FORMAT or srt)
maxItems (optional): Max number of videos to fetch (yt-dlp --max-downloads)
playlistItems (optional): yt-dlp -I spec: "1:5", "1,3,7", "-1" for last, "1:10:2" for every 2nd

Output Schema

Fields (all required): results
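The playlistItems parameter follows yt-dlp's -I item-spec syntax. As a rough illustration of those semantics (yt-dlp's own parser is authoritative and handles more edge cases, e.g. negative range bounds), a sketch that resolves a spec against a playlist of n videos using 1-based, stop-inclusive indices:

```python
def select_playlist_items(spec: str, n: int) -> list[int]:
    """Resolve a yt-dlp-style -I spec ("1:5", "1,3,7", "-1", "1:10:2")
    to 1-based item numbers within a playlist of n videos.
    Simplified illustration: range bounds here must be positive."""
    chosen = []
    for part in spec.split(","):
        if ":" in part:
            fields = part.split(":")
            start, stop = int(fields[0]), int(fields[1])
            step = int(fields[2]) if len(fields) > 2 else 1
            chosen.extend(range(start, stop + 1, step))  # stop is inclusive
        else:
            i = int(part)
            chosen.append(i if i > 0 else n + 1 + i)  # "-1" means the last item
    return [i for i in chosen if 1 <= i <= n]
```

For a 10-video playlist, "1:5" selects items 1 through 5, "1:10:2" every second item (1, 3, 5, 7, 9), and "-1" only the last video.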
Behavior: 3/5

Annotations disclose read-only/idempotent safety. Description adds valuable processing context ('cleaned', 'plain text') that distinguishes output from raw subtitle formats. However, fails to disclose error behavior on large playlists, partial failures, or what 'cleaning' specifically removes (timestamps/speakers).

Conciseness: 5/5

Two tightly-constructed sentences. First establishes purpose/scope, second gives practical usage guidance. Zero redundancy. Every word earns its place; brief yet informative.

Completeness: 4/5

Given excellent schema coverage (100%), detailed parameter docs, present annotations, and an existing output schema, the description appropriately avoids repetition. Would benefit from a brief note on large-playlist processing constraints, but otherwise adequate.

Parameters: 3/5

Schema coverage is 100% with detailed field descriptions. Description briefly references playlistItems and maxItems usage, adding minimal semantic value beyond what the schema already documents explicitly (including yt-dlp syntax in schema). Baseline score appropriate for comprehensive schema coverage.

Purpose: 5/5

Excellent pairing of a specific verb and resource. 'Fetch cleaned subtitles (plain text)' clearly defines both the action and output format. Explicitly scoping to 'multiple videos from a playlist' distinguishes it from siblings get_transcript (single video) and get_raw_subtitles (unprocessed) by emphasizing scope and processing.

Usage Guidelines: 3/5

Provides specific parameter usage examples ('1:5' syntax for playlistItems), giving practical guidance for filtering. However, lacks explicit when-to-use guidance versus siblings (e.g., no statement like 'For single videos, use get_transcript instead').

get_raw_subtitles: Get raw video subtitles (Grade: B)
Read-only, Idempotent

Fetch raw SRT/VTT subtitles for a video (supported platforms). Optional: type, lang, response_limit (when omitted returns full content), next_cursor for pagination.

Parameters (JSON Schema)
url (required): Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID
lang (optional): Language code (e.g. en, es). When omitted with Whisper fallback, language is auto-detected
type (optional): Subtitle track type: official or auto-generated
format (optional): Subtitle format (default from YT_DLP_SUB_FORMAT or srt)
next_cursor (optional): Opaque cursor from previous response for pagination
response_limit (optional): Max characters per response. When omitted, returns full content. When set: min 1000

Output Schema

lang (required)
type (required)
format (required)
source (optional)
content (required)
videoId (required)
end_offset (required)
next_cursor (optional)
is_truncated (required)
start_offset (required)
total_length (required)
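The offset and cursor fields in the output schema imply a simple client loop: request pages until is_truncated is false, passing next_cursor back each time. A sketch against a stand-in for the tool; the real cursor is opaque (the character-offset cursor here is only for illustration), and the real server enforces a response_limit minimum of 1000:

```python
def fake_get_raw_subtitles(content: str, response_limit: int, next_cursor=None) -> dict:
    """Stand-in for the tool: slices `content` the way the output schema describes
    (start_offset / end_offset / total_length / is_truncated / next_cursor)."""
    start = int(next_cursor) if next_cursor else 0
    end = min(start + response_limit, len(content))
    truncated = end < len(content)
    return {
        "content": content[start:end],
        "start_offset": start,
        "end_offset": end,
        "total_length": len(content),
        "is_truncated": truncated,
        "next_cursor": str(end) if truncated else None,  # opaque in the real server
    }

def fetch_all(content: str, response_limit: int) -> str:
    """Client-side pagination loop: follow next_cursor until is_truncated is False."""
    parts, cursor = [], None
    while True:
        page = fake_get_raw_subtitles(content, response_limit, cursor)
        parts.append(page["content"])
        if not page["is_truncated"]:
            return "".join(parts)
        cursor = page["next_cursor"]
```

The same loop applies to get_transcript, whose output schema exposes the identical pagination fields.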
Behavior: 3/5

With annotations declaring readOnlyHint and idempotentHint, the safety profile is covered. The description adds valuable behavioral context about pagination (next_cursor) and response limiting behavior ('when omitted returns full content'), though it omits details about rate limits, Whisper fallback behavior, or typical response sizes.

Conciseness: 4/5

The description is efficiently structured with the core purpose front-loaded, followed by optional parameter hints. It avoids redundancy with the schema while adding value. The parenthetical 'supported platforms' is slightly vague since platforms are detailed in the schema, but overall information density is high.

Completeness: 4/5

Given the presence of an output schema (which handles return value documentation), 100% parameter coverage, and annotations covering safety hints, the description appropriately focuses on operational behaviors like pagination. It sufficiently covers the tool's complexity without unnecessary verbosity.

Parameters: 4/5

Despite 100% schema description coverage, the description adds meaningful context beyond the schema by explaining the interaction between response_limit and full content retrieval, and clarifying the pagination cursor purpose. This helps the agent understand parameter dependencies.

Purpose: 4/5

The description clearly states the tool fetches 'raw SRT/VTT subtitles' with a specific verb and resource type. It implies distinction from siblings like get_transcript (which likely returns processed text) by emphasizing 'raw' and specific file formats, though it could more explicitly differentiate from get_available_subtitles.

Usage Guidelines: 2/5

The description lists optional parameters but provides no strategic guidance on when to use this tool versus siblings like get_transcript or get_available_subtitles. It does not mention prerequisites (like checking available subtitles first) or when raw formats are preferred over processed transcripts.

get_transcript: Get video transcript (Grade: A)
Read-only, Idempotent

Fetch cleaned subtitles as plain text for a video (YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit). Uses auto-discovery for type/language when omitted. Optional: type, lang, response_limit (when omitted returns full transcript), next_cursor for pagination.

Parameters (JSON Schema)
url (required): Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID
lang (optional): Language code (e.g. en, es). When omitted with Whisper fallback, language is auto-detected
type (optional): Subtitle track type: official or auto-generated
format (optional): Subtitle format (default from YT_DLP_SUB_FORMAT or srt)
next_cursor (optional): Opaque cursor from previous response for pagination
response_limit (optional): Max characters per response. When omitted, returns full content. When set: min 1000

Output Schema

lang (required)
text (required)
type (required)
source (optional)
videoId (required)
end_offset (required)
next_cursor (optional)
is_truncated (required)
start_offset (required)
total_length (required)
Behavior: 4/5

Annotations establish read-only/idempotent safety. Description adds valuable behavioral context: 'cleaned' implies text processing/normalization, 'auto-discovery' documents fallback behavior for type/language, and explicit pagination semantics (response_limit/next_cursor). No contradictions with annotations.

Conciseness: 5/5

Two high-density sentences with zero redundancy. First sentence establishes purpose and scope; second sentence efficiently catalogs key optional parameters and their default behaviors. Front-loaded and efficiently structured.

Completeness: 4/5

Complete given existing output schema and comprehensive parameter coverage. Adequately addresses the tool's unique value (cleaning, auto-discovery, pagination). Could be improved by noting rate limits or error conditions, but sufficient for invocation decisions.

Parameters: 4/5

With 100% schema coverage, description wisely focuses on adding behavioral semantics beyond the schema: documents auto-discovery logic for type/lang, explains 'when omitted returns full transcript' for response_limit, and clarifies pagination cursor usage. Adds meaningful context despite complete schema coverage.

Purpose: 5/5

Specific verb 'Fetch' with clear resource 'cleaned subtitles' and output format 'plain text'. Explicitly distinguishes from sibling get_raw_subtitles by emphasizing 'cleaned' processing and text output rather than raw subtitle formats. Comprehensive platform list defines scope.

Usage Guidelines: 3/5

Provides implicit differentiation via 'cleaned/plain text' versus raw alternatives, and mentions pagination behavior (next_cursor). However, lacks explicit comparison to siblings like get_raw_subtitles or get_available_subtitles, and doesn't clarify when to check available languages first versus fetching directly.

get_video_chapters: Get video chapters (Grade: B)
Read-only, Idempotent

Fetch chapter markers (start/end time, title) for a video.

Parameters (JSON Schema)
url (required): Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID
format (optional): Subtitle format (default from YT_DLP_SUB_FORMAT or srt)

Output Schema

Fields (all required): videoId, chapters
Behavior: 3/5

Annotations cover readOnlyHint and idempotentHint. The description adds value by clarifying the data structure of chapter markers (start/end time, title), which hints at the return format. However, it omits behavioral details like rate limits, platform-specific limitations, or the fact that not all videos have chapters.

Conciseness: 5/5

Single sentence with zero waste. Front-loaded with action verb, immediately specifies the resource with parenthetical clarification of data structure. Every word earns its place; no redundancy with title or schema.

Completeness: 4/5

Appropriate for a focused read operation with 100% schema coverage and existing output schema. The description identifies the core value proposition (chapter markers) without needing to duplicate return value documentation. Minor gap: could clarify the format parameter's role in output formatting, though schema covers this.

Parameters: 3/5

Schema description coverage is 100%, with the url parameter documenting supported platforms and format documenting subtitle format options. The description does not add parameter syntax, examples, or clarify the relationship between 'chapters' and the 'subtitle format' parameter (srt/vtt), so it meets the baseline of 3.

Purpose: 4/5

The description uses specific verb 'Fetch' and clearly identifies the resource as 'chapter markers' with structural details (start/end time, title). This distinguishes it from sibling transcript/subtitle tools. However, it does not explicitly differentiate from get_video_info, which may also return chapter metadata, preventing a 5.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus siblings like get_video_info or get_transcript. While 'chapter markers' implies the specific use case, there are no when-to-use exclusions, prerequisites (like requiring chapters to exist), or alternative recommendations.

get_video_info: Get video info (Grade: B)
Read-only, Idempotent

Fetch extended metadata for a video (title, channel, duration, tags, thumbnails, etc.).

Parameters (JSON Schema)
url (required): Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID
format (optional): Subtitle format (default from YT_DLP_SUB_FORMAT or srt)

Output Schema

Fields (all required): tags, title, isLive, channel, videoId, wasLive, duration, uploader, channelId, likeCount, thumbnail, viewCount, categories, channelUrl, liveStatus, thumbnails, uploadDate, uploaderId, webpageUrl, description, availability, commentCount
Behavior: 3/5

Annotations already declare readOnlyHint=true and idempotentHint=true, establishing safety. The description adds value by listing example metadata fields returned (title, channel, etc.), hinting at output structure. However, it fails to explain error handling, rate limits, or the anomalous presence of a subtitle format parameter in a metadata tool.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the core action ('Fetch extended metadata') and uses parenthetical examples effectively. There is no redundant or wasted text.

Completeness: 3/5

While an output schema exists (reducing the need for detailed return value descriptions), the description fails to clarify why a 'subtitle format' parameter exists in a tool named 'get_video_info', creating ambiguity about whether subtitles are included in the metadata response. This gap leaves the tool's full scope unclear.

Parameters: 3/5

Schema description coverage is 100%, with the 'url' and 'format' parameters fully documented in the schema. The description adds no parameter-specific guidance, but with complete schema coverage, the baseline score of 3 is appropriate.

Purpose: 4/5

The description clearly states the tool fetches 'extended metadata' with specific examples (title, channel, duration, tags, thumbnails), distinguishing it from transcript-focused siblings like get_transcript or get_raw_subtitles. However, it does not explicitly differentiate its scope from sibling tools.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives (e.g., when to use get_video_info vs get_transcript or get_available_subtitles). There are no prerequisites, exclusions, or workflow hints provided.

search_videos: Search videos (Grade: A)
Read-only

Search videos on YouTube via yt-dlp (ytsearch). Returns list of matching videos with metadata. Optional: limit, offset (pagination), uploadDateFilter (hour|today|week|month|year), dateBefore, date, matchFilter (e.g. "!is_live"), response_format (json|markdown).

Parameters (JSON Schema)
date (optional): yt-dlp --date, exact date e.g. "20231215" or "today-2weeks"
limit (optional): Max results (default 10)
query (optional): Search query
offset (optional): Skip first N results (pagination)
dateBefore (optional): yt-dlp --datebefore, e.g. "now-1year" or "20241201"
matchFilter (optional): yt-dlp --match-filter, e.g. "!is_live" or "duration < 3600 & like_count > 100"
response_format (optional): Format of the human-readable content: json (default) or markdown
uploadDateFilter (optional): Filter by upload date (relative to now)

Output Schema

Fields (all required): results
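The filter parameters map directly onto yt-dlp flags, so they compose in one arguments object. A hypothetical example payload for this tool (the query and filter values are illustrative, chosen only to exercise the documented syntax):

```python
import json

# Hypothetical search_videos arguments: non-live videos under an hour with some
# traction, uploaded within the last month, first page of results.
arguments = {
    "query": "model context protocol tutorial",
    "limit": 10,
    "offset": 0,
    "uploadDateFilter": "month",  # one of hour|today|week|month|year
    "matchFilter": "!is_live & duration < 3600 & like_count > 100",  # yt-dlp --match-filter syntax
    "response_format": "markdown",
}

payload = json.dumps(arguments)
```

This dict would be sent as the `arguments` field of an MCP `tools/call` request with `name` set to `search_videos`.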
Behavior: 4/5

Annotations declare readOnlyHint=true and idempotentHint=false; the description adds valuable context by disclosing the yt-dlp implementation and stating it returns 'list of matching videos with metadata'. This helps set expectations about the data source and return format without contradicting annotations.

Conciseness: 5/5

Every sentence earns its place: first clause defines action and mechanism, second states return value, third efficiently lists optional parameters with parenthetical examples/values. No redundancy or filler.

Completeness: 4/5

Given the rich input schema (100% coverage), presence of output schema, and annotations, the description appropriately focuses on high-level behavior. It could note that all parameters are optional (required: 0), but otherwise adequately covers the tool's scope.

Parameters: 4/5

With 100% schema coverage, baseline is 3. The description adds value by enumerating optional parameters with specific examples (e.g., '!is_live' for matchFilter, enum values for uploadDateFilter and response_format) in a concise summary format that complements the detailed schema.

Purpose: 5/5

Description uses specific verb 'Search' with resource 'videos on YouTube' and explicitly mentions implementation via 'yt-dlp (ytsearch)'. Clearly distinguishes from sibling 'get_*' tools, which retrieve specific video data, versus this search functionality.

Usage Guidelines: 3/5

No explicit when-to-use or when-not-to-use guidance provided. While the distinction from sibling tools is implied by the name and 'search' versus 'get' semantics, there is no explicit guidance such as 'use this to find video IDs before calling get_video_info'.
