Transcriptor
Server Details
An MCP server that fetches video transcripts and subtitles, with pagination for large responses. Supports YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, and Reddit. A Whisper fallback transcribes audio when subtitles are unavailable.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
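Below is a minimal sketch of connecting to this server over Streamable HTTP and calling a tool. It assumes the official MCP Python SDK; the endpoint and video URLs are placeholders, not values from this listing.

```python
# Minimal connection sketch using the MCP Python SDK over Streamable HTTP.
# SERVER_URL is a placeholder; substitute the endpoint from this listing.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # hypothetical endpoint


async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_transcript",
                arguments={"url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ"},
            )
            print(result.content)


asyncio.run(main())
```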
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 7 of 7 tools scored.
Tools are mostly distinct, but the quartet of transcript/subtitle fetching tools (get_transcript, get_raw_subtitles, get_playlist_transcripts, get_available_subtitles) requires careful reading to distinguish between raw vs cleaned formats and single vs playlist scope. Boundaries are clear from descriptions but naming similarity could cause hesitation.
Follows consistent verb_noun snake_case pattern throughout (get_, search_). Minor terminology inconsistency: 'subtitles' vs 'transcripts' used interchangeably for similar resources (though descriptions clarify raw vs cleaned intent). All seven tools follow predictable structure.
Seven tools is ideal for this focused domain. Covers the complete workflow: search (search_videos), discovery (get_video_info, get_available_subtitles, get_video_chapters), and extraction (get_transcript, get_raw_subtitles, get_playlist_transcripts). No bloat, no obvious missing operations for a read-only transcription server.
Excellent coverage for read-only transcription extraction across multiple platforms. Supports single videos and playlists, raw and cleaned formats, with pagination and language selection. Minor gap: no explicit 'list supported platforms' tool, though get_transcript description mentions them. No CRUD operations for creating/updating subtitles, but that appears intentional for a Transcriptor service.
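To make that search/discovery/extraction workflow concrete, here is a hedged sketch of the call sequence, reusing a session opened as in the connection example above. Tool names come from this listing; the query, URL, and language values are illustrative.

```python
from mcp import ClientSession


async def research_video(session: ClientSession, query: str) -> None:
    # Search -> discovery -> extraction, mirroring the workflow described above.
    hits = await session.call_tool("search_videos", arguments={"query": query, "limit": 5})
    print(hits.content)  # pick a URL from the results; the one below is a stand-in
    url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
    info = await session.call_tool("get_video_info", arguments={"url": url})
    langs = await session.call_tool("get_available_subtitles", arguments={"url": url})
    print(info.content, langs.content)
    text = await session.call_tool("get_transcript", arguments={"url": url, "lang": "en"})
    print(text.content)
```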
Available Tools
7 tools

get_available_subtitles: Get available subtitle languages (Grade A, Read-only, Idempotent)
List available official and auto-generated subtitle languages.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID | |
| format | No | Subtitle format (default from YT_DLP_SUB_FORMAT or srt) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| auto | Yes | |
| videoId | Yes | |
| official | Yes | |
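A typical use is to check language availability before fetching content. A sketch, assuming a live ClientSession as in the connection example above; the Vimeo URL is illustrative.

```python
from mcp import ClientSession


async def list_tracks(session: ClientSession) -> None:
    # Discover which official and auto-generated tracks exist for a video.
    result = await session.call_tool(
        "get_available_subtitles",
        arguments={"url": "https://vimeo.com/76979871", "format": "srt"},
    )
    # Per the output schema, the result carries videoId plus "official"
    # and "auto" language listings.
    print(result.content)
```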
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context beyond annotations by specifying 'official and auto-generated' subtitle types, helping the agent understand the scope of discovery. Annotations cover safety profile (readOnly/idempotent), so description appropriately focuses on content scope rather than repeating safety properties.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action verb 'List', zero waste. 'official and auto-generated' earns its place by clarifying scope without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 2-parameter discovery tool with 100% schema coverage, existing output schema, and annotations present. Could marginally improve by noting this precedes download operations, but sufficient as-is.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with comprehensive descriptions for both url (platform list) and format (default behavior). Description doesn't add parameter semantics, but with complete schema documentation, baseline 3 is appropriate and sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'List' with resource 'subtitle languages'. The phrase 'available...languages' effectively distinguishes this discovery tool from siblings like get_transcript or get_raw_subtitles (which retrieve actual content) and get_video_info (which covers general metadata).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'available' (suggesting it's for checking what exists before downloading), but lacks explicit guidance like 'Use before get_transcript to check supported languages' or warnings about when not to use alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_playlist_transcripts: Get playlist transcripts (Grade A, Read-only, Idempotent)
Fetch cleaned subtitles (plain text) for multiple videos from a playlist. Use playlistItems (e.g. "1:5") to select specific items, maxItems to limit count.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Playlist URL (e.g. youtube.com/playlist?list=XXX) or watch URL with list= parameter | |
| lang | No | Language code (e.g. en, ru). Default: en | |
| type | No | Subtitle track type: official or auto-generated (default: auto) | |
| format | No | Subtitle format (default from YT_DLP_SUB_FORMAT or srt) | |
| maxItems | No | Max number of videos to fetch (yt-dlp --max-downloads) | |
| playlistItems | No | yt-dlp -I spec: "1:5", "1,3,7", "-1" for last, "1:10:2" for every 2nd | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
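A sketch of selecting a slice of a playlist, assuming a live ClientSession; the playlist ID is a placeholder, and "1:5" follows the yt-dlp -I syntax documented in the schema.

```python
from mcp import ClientSession


async def fetch_first_five(session: ClientSession) -> None:
    # Cleaned transcripts for playlist items 1 through 5, English tracks.
    result = await session.call_tool(
        "get_playlist_transcripts",
        arguments={
            "url": "https://www.youtube.com/playlist?list=PLxxxxxxxx",  # placeholder ID
            "lang": "en",
            "playlistItems": "1:5",
        },
    )
    print(result.content)  # per the output schema, a "results" collection
```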
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations disclose read-only/idempotent safety. Description adds valuable processing context ('cleaned', 'plain text') that distinguishes output from raw subtitle formats. However, fails to disclose error behavior on large playlists, partial failures, or what 'cleaning' specifically removes (timestamps/speakers).
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly-constructed sentences. First establishes purpose/scope, second gives practical usage guidance. Zero redundancy. Every word earns its place—brief yet informative.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given excellent schema coverage (100%), detailed parameter docs, annotations present, and output schema exists, the description appropriately avoids repetition. Would benefit from brief note on handling large playlist processing constraints, but otherwise adequate.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed field descriptions. Description briefly references playlistItems and maxItems usage, adding minimal semantic value beyond what the schema already documents explicitly (including yt-dlp syntax in schema). Baseline score appropriate for comprehensive schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent 'specific verb+resource'. 'Fetch cleaned subtitles (plain text)' clearly defines both the action and output format. Explicitly 'multiple videos from a playlist' distinguishes from sibling get_transcript (single video) and get_raw_subtitles (unprocessed) by emphasizing scope and processing.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific parameter usage examples ('1:5' syntax for playlistItems), giving practical guidance for filtering. However, lacks explicit when-to-use guidance versus siblings (e.g., no statement like 'For single videos, use get_transcript instead').
get_raw_subtitles: Get raw video subtitles (Grade B, Read-only, Idempotent)
Fetch raw SRT/VTT subtitles for a video (supported platforms). Optional: type, lang, response_limit (when omitted returns full content), next_cursor for pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID | |
| lang | No | Language code (e.g. en, es). When omitted with Whisper fallback, language is auto-detected | |
| type | No | Subtitle track type: official or auto-generated | |
| format | No | Subtitle format (default from YT_DLP_SUB_FORMAT or srt) | |
| next_cursor | No | Opaque cursor from previous response for pagination | |
| response_limit | No | Max characters per response. When omitted, returns full content. When set: min 1000 | |
Output Schema
| Name | Required | Description |
|---|---|---|
| lang | Yes | |
| type | Yes | |
| format | Yes | |
| source | No | |
| content | Yes | |
| videoId | Yes | |
| end_offset | Yes | |
| next_cursor | No | |
| is_truncated | Yes | |
| start_offset | Yes | |
| total_length | Yes | |
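Because responses can be paginated, a client typically loops on next_cursor until is_truncated is false. The sketch below assumes a live ClientSession and that the structured result arrives as JSON text content; the field names follow the output schema above.

```python
import json

from mcp import ClientSession


async def fetch_full_srt(session: ClientSession, url: str) -> str:
    # Page through a long raw subtitle file 20,000 characters at a time.
    chunks: list[str] = []
    cursor = None
    while True:
        args = {"url": url, "format": "srt", "response_limit": 20000}
        if cursor is not None:
            args["next_cursor"] = cursor
        result = await session.call_tool("get_raw_subtitles", arguments=args)
        page = json.loads(result.content[0].text)  # assumes JSON text content
        chunks.append(page["content"])
        if not page["is_truncated"]:
            break
        cursor = page["next_cursor"]
    return "".join(chunks)
```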
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With annotations declaring readOnlyHint and idempotentHint, the safety profile is covered. The description adds valuable behavioral context about pagination (next_cursor) and response limiting behavior ('when omitted returns full content'), though it omits details about rate limits, Whisper fallback behavior, or typical response sizes.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with the core purpose front-loaded, followed by optional parameter hints. It avoids redundancy with the schema while adding value. The parenthetical 'supported platforms' is slightly vague since platforms are detailed in the schema, but overall information density is high.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (which handles return value documentation), 100% parameter coverage, and annotations covering safety hints, the description appropriately focuses on operational behaviors like pagination. It sufficiently covers the tool's complexity without unnecessary verbosity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage, the description adds meaningful context beyond the schema by explaining the interaction between response_limit and full content retrieval, and clarifying the pagination cursor purpose. This helps the agent understand parameter dependencies.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches 'raw SRT/VTT subtitles' with a specific verb and resource type. It implies distinction from siblings like get_transcript (which likely returns processed text) by emphasizing 'raw' and specific file formats, though it could more explicitly differentiate from get_available_subtitles.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists optional parameters but provides no strategic guidance on when to use this tool versus siblings like get_transcript or get_available_subtitles. It does not mention prerequisites (like checking available subtitles first) or when raw formats are preferred over processed transcripts.
get_transcript: Get video transcript (Grade A, Read-only, Idempotent)
Fetch cleaned subtitles as plain text for a video (YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit). Uses auto-discovery for type/language when omitted. Optional: type, lang, response_limit (when omitted returns full transcript), next_cursor for pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID | |
| lang | No | Language code (e.g. en, es). When omitted with Whisper fallback, language is auto-detected | |
| type | No | Subtitle track type: official or auto-generated | |
| format | No | Subtitle format (default from YT_DLP_SUB_FORMAT or srt) | |
| next_cursor | No | Opaque cursor from previous response for pagination | |
| response_limit | No | Max characters per response. When omitted, returns full content. When set: min 1000 | |
Output Schema
| Name | Required | Description |
|---|---|---|
| lang | Yes | |
| text | Yes | |
| type | Yes | |
| source | No | |
| videoId | Yes | |
| end_offset | Yes | |
| next_cursor | No | |
| is_truncated | Yes | |
| start_offset | Yes | |
| total_length | Yes | |
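A short sketch of the simplest call path, assuming a live ClientSession: with type and lang omitted, the server auto-discovers a track, and with response_limit omitted, the full cleaned transcript comes back in one response. The URL is a placeholder.

```python
from mcp import ClientSession


async def quick_transcript(session: ClientSession) -> None:
    # No type/lang: the server auto-discovers a subtitle track.
    # No response_limit: the full cleaned transcript is returned at once.
    result = await session.call_tool(
        "get_transcript",
        arguments={"url": "https://www.twitch.tv/videos/123456789"},  # placeholder
    )
    print(result.content)
```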
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only/idempotent safety. Description adds valuable behavioral context: 'cleaned' implies text processing/normalization, 'auto-discovery' documents fallback behavior for type/language, and explicit pagination semantics (response_limit/next_cursor). No contradictions with annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two high-density sentences with zero redundancy. First sentence establishes purpose and scope; second sentence efficiently catalogs key optional parameters and their default behaviors. Front-loaded and efficiently structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete given existing output schema and comprehensive parameter coverage. Adequately addresses the tool's unique value (cleaning, auto-discovery, pagination). Could be improved by noting rate limits or error conditions, but sufficient for invocation decisions.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, description wisely focuses on adding behavioral semantics beyond the schema: documents auto-discovery logic for type/lang, explains 'when omitted returns full transcript' for response_limit, and clarifies pagination cursor usage. Adds meaningful context despite complete schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Fetch' with clear resource 'cleaned subtitles' and output format 'plain text'. Explicitly distinguishes from sibling get_raw_subtitles by emphasizing 'cleaned' processing and text output rather than raw subtitle formats. Comprehensive platform list defines scope.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit differentiation via 'cleaned/plain text' versus raw alternatives, and mentions pagination behavior (next_cursor). However, lacks explicit comparison to siblings like get_raw_subtitles or get_available_subtitles, and doesn't clarify when to check available languages first versus fetching directly.
get_video_chapters: Get video chapters (Grade B, Read-only, Idempotent)
Fetch chapter markers (start/end time, title) for a video.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID | |
| format | No | Subtitle format (default from YT_DLP_SUB_FORMAT or srt) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| videoId | Yes | |
| chapters | Yes | |
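A sketch, assuming a live ClientSession; not every video defines chapters, so callers should tolerate an empty result.

```python
from mcp import ClientSession


async def print_chapters(session: ClientSession, url: str) -> None:
    # Per the output schema: videoId plus a chapters array (possibly empty).
    result = await session.call_tool("get_video_chapters", arguments={"url": url})
    print(result.content)
```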
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover readOnlyHint and idempotentHint. The description adds value by clarifying the data structure of chapter markers (start/end time, title), which hints at the return format. However, it omits behavioral details like rate limits, platform-specific limitations, or the fact that not all videos have chapters.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded with action verb, immediately specifies the resource with parenthetical clarification of data structure. Every word earns its place; no redundancy with title or schema.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a focused read operation with 100% schema coverage and existing output schema. The description identifies the core value proposition (chapter markers) without needing to duplicate return value documentation. Minor gap: could clarify the format parameter's role in output formatting, though schema covers this.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the url parameter documenting supported platforms and format documenting subtitle format options. The description does not add parameter syntax, examples, or clarify the relationship between 'chapters' and the 'subtitle format' parameter (srt/vtt), so it meets the baseline of 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Fetch' and clearly identifies the resource as 'chapter markers' with structural details (start/end time, title). This distinguishes it from sibling transcript/subtitle tools. However, it does not explicitly differentiate from get_video_info which may also return chapter metadata, preventing a 5.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus siblings like get_video_info or get_transcript. While 'chapter markers' implies the specific use case, there are no when-to-use exclusions, prerequisites (like requiring chapters to exist), or alternative recommendations.
get_video_info: Get video info (Grade B, Read-only, Idempotent)
Fetch extended metadata for a video (title, channel, duration, tags, thumbnails, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Video URL (supported: YouTube, Twitter/X, Instagram, TikTok, Twitch, Vimeo, Facebook, Bilibili, VK, Dailymotion, Reddit) or YouTube video ID | |
| format | No | Subtitle format (default from YT_DLP_SUB_FORMAT or srt) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| tags | Yes | |
| title | Yes | |
| isLive | Yes | |
| channel | Yes | |
| videoId | Yes | |
| wasLive | Yes | |
| duration | Yes | |
| uploader | Yes | |
| channelId | Yes | |
| likeCount | Yes | |
| thumbnail | Yes | |
| viewCount | Yes | |
| categories | Yes | |
| channelUrl | Yes | |
| liveStatus | Yes | |
| thumbnails | Yes | |
| uploadDate | Yes | |
| uploaderId | Yes | |
| webpageUrl | Yes | |
| description | Yes | |
| availability | Yes | |
| commentCount | Yes | |
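A sketch, assuming a live ClientSession. The optional format parameter is omitted here, since (as the review below notes) its role in a metadata call is unclear.

```python
from mcp import ClientSession


async def print_metadata(session: ClientSession, url: str) -> None:
    # Fetch extended metadata only; no subtitle content is requested.
    result = await session.call_tool("get_video_info", arguments={"url": url})
    print(result.content)
```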
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, establishing safety. The description adds value by listing example metadata fields returned (title, channel, etc.), hinting at output structure. However, it fails to explain error handling, rate limits, or the anomalous presence of a subtitle format parameter in a metadata tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Fetch extended metadata') and uses parenthetical examples effectively. There is no redundant or wasted text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While an output schema exists (reducing the need for detailed return value descriptions), the description fails to clarify why a 'subtitle format' parameter exists in a tool named 'get_video_info', creating ambiguity about whether subtitles are included in the metadata response. This gap leaves the tool's full scope unclear.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'url' and 'format' parameters fully documented in the schema. The description adds no parameter-specific guidance, but with complete schema coverage, the baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches 'extended metadata' with specific examples (title, channel, duration, tags, thumbnails), distinguishing it from transcript-focused siblings like get_transcript or get_raw_subtitles. However, it does not explicitly differentiate its scope from sibling tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., when to use get_video_info vs get_transcript or get_available_subtitles). There are no prerequisites, exclusions, or workflow hints provided.
search_videos: Search videos (Grade A, Read-only)
Search videos on YouTube via yt-dlp (ytsearch). Returns list of matching videos with metadata. Optional: limit, offset (pagination), uploadDateFilter (hour|today|week|month|year), dateBefore, date, matchFilter (e.g. "!is_live"), response_format (json|markdown).
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | yt-dlp --date, exact date e.g. "20231215" or "today-2weeks" | |
| limit | No | Max results (default 10) | |
| query | No | Search query | |
| offset | No | Skip first N results (pagination) | |
| dateBefore | No | yt-dlp --datebefore, e.g. "now-1year" or "20241201" | |
| matchFilter | No | yt-dlp --match-filter, e.g. "!is_live" or "duration < 3600 & like_count > 100" | |
| response_format | No | Format of the human-readable content: json (default) or markdown | |
| uploadDateFilter | No | Filter by upload date (relative to now) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
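A sketch combining several of the documented filters, assuming a live ClientSession; the query and filter values are illustrative.

```python
from mcp import ClientSession


async def search_recent_talks(session: ClientSession) -> None:
    # yt-dlp-style filters: non-live uploads under an hour, from the last year.
    result = await session.call_tool(
        "search_videos",
        arguments={
            "query": "kubernetes networking deep dive",
            "limit": 10,
            "uploadDateFilter": "year",
            "matchFilter": "!is_live & duration < 3600",
            "response_format": "markdown",
        },
    )
    print(result.content)
```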
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=false; the description adds valuable context by disclosing the yt-dlp implementation and stating it returns 'list of matching videos with metadata'. This helps set expectations about the data source and return format without contradicting annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every sentence earns its place: first clause defines action and mechanism, second states return value, third efficiently lists optional parameters with parenthetical examples/values. No redundancy or filler.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich input schema (100% coverage), presence of output schema, and annotations, the description appropriately focuses on high-level behavior. It could note that all parameters are optional (required: 0), but otherwise adequately covers the tool's scope.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds value by enumerating optional parameters with specific examples (e.g., '!is_live' for matchFilter, enum values for uploadDateFilter and response_format) in a concise summary format that complements the detailed schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Search' with resource 'videos on YouTube' and explicitly mentions implementation via 'yt-dlp (ytsearch)'. Clearly distinguishes from sibling 'get_*' tools which retrieve specific video data versus this search functionality.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance provided. While the distinction from sibling tools is implied by the name and 'search' versus 'get' semantics, there is no explicit guidance such as 'use this to find video IDs before calling get_video_info'.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.